Yes, different autoTVM/autoscheduler runs on the same network can yield
different implementations.
If you know your model doesn't change, you would autoschedule once and save the
log files of the optimized implementation. If for some reason you need to
recompile your model, you would tell TVM to reuse those saved log files instead
of tuning again.
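For reference, a minimal sketch of that reuse, assuming an auto-scheduler log
file named `tuning.json` already exists from an earlier tuning session (the
file name and the tiny stand-in network are placeholders, not from this
thread):

```
import tvm
from tvm import auto_scheduler, relay

# A tiny network standing in for "your model".
data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(8, 3, 3, 3))
mod = tvm.IRModule.from_expr(relay.nn.conv2d(data, weight))

# Apply the previously saved auto-scheduler records ("tuning.json" is a
# placeholder name) instead of tuning again.
with auto_scheduler.ApplyHistoryBest("tuning.json"):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target="llvm")
```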
@manupa-arm @matt-arm
So maybe I can ask more directly.
Is the following ordering a solution to not being able to do what I said
before: first pattern-match your offloadable subgraph, and then, within the
extracted composite, replace the native Relay operators with your new
ethosu.conv2d Relay operator?
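For what it's worth, a minimal sketch of that match-then-replace mechanism
using the Relay dataflow pattern API (the shapes and the identity callback are
illustrative placeholders, not the actual ethosu legalization):

```
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, rewrite, wildcard

# Step 1 of the ordering above: match the offloadable conv2d.
# Step 2 would construct the NPU-specific operator inside the match;
# the callback below just returns the match unchanged, since building
# a real ethosu.conv2d needs extra quantization arguments.
class OffloadConv2d(DFPatternCallback):
    def __init__(self):
        super().__init__()
        self.pattern = is_op("nn.conv2d")(wildcard(), wildcard())

    def callback(self, pre, post, node_map):
        return post  # replace with the ethosu.conv2d construction here

data = relay.var("data", shape=(1, 8, 8, 4))
weight = relay.var("weight", shape=(3, 3, 4, 4))
conv = relay.nn.conv2d(data, weight, data_layout="NHWC", kernel_layout="HWIO")
new_expr = rewrite(OffloadConv2d(), conv)
```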
Hello,
I was looking at the Arm EthosU integration in TVM and [noticed that there was
a new conv2d Relay operator
defined](https://github.com/apache/tvm/blob/main/python/tvm/relay/backend/contrib/ethosu/op/convolution.py#L185).
Obviously, this operator is only legal/valid for offloading onto the Ethos-U NPU.
Just inline the one stage into the other one?
EDIT:
Wait, your if statements require variables which are not defined (blockIdx.x
and threadIdx.x).
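For context, inlining one stage into another looks like this in TE (a minimal,
illustrative sketch; the tensor names are not from this thread):

```
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
C = te.compute((n,), lambda i: B[i] * 2.0, name="C")

s = te.create_schedule(C.op)
s[B].compute_inline()  # fuse the producer stage B into its consumer C
print(tvm.lower(s, [A, C], simple_mode=True))
```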
---
I guess what you are seeing are cases where the variable can take any value in
the range of the data type you selected, so for correctness all those if
statements are necessary?
Any reason you want to use te.var? I am guessing it is due to some dynamic
shape you want to support.
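To illustrate, a minimal sketch of how a te.var extent forces such guards
(illustrative only, not the code from the thread):

```
import tvm
from tvm import te

n = te.var("n")  # dynamic extent, only known at runtime
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)
# because n is unknown at compile time, the lowered code carries an
# if-guard so the tail iterations do not run out of bounds
print(tvm.lower(s, [A, B], simple_mode=True))
```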
Hey,
Take this with a grain of salt since I am not an official voice of the people
who developed those things.
The `compute` is meant to register the naive, non-optimized, hardware-independent
computation rule of a Relay operator. You can think of it as a golden
reference.
Now, there can be many different schedules that implement that same compute.
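As a rough illustration of compute-as-golden-reference (the "add one" operator
below is hypothetical, chosen only for brevity):

```
import numpy as np
import tvm
from tvm import te

# Naive, hardware-independent compute rule for a hypothetical
# "add one" operator; this is the golden reference semantics.
n = 16
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] + 1.0, name="add_one")

# A default schedule is enough to check the reference behaviour.
s = te.create_schedule(B.op)
f = tvm.build(s, [A, B], target="llvm")

a = tvm.nd.array(np.arange(n, dtype="float32"))
b = tvm.nd.empty((n,), dtype="float32")
f(a, b)
np.testing.assert_allclose(b.numpy(), a.numpy() + 1.0)
```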
Thanks for your response, but I still don't understand what the solution is...
except for decomposing the tuple into elementary arguments.
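If decomposing is indeed the way to go, a minimal sketch (assuming the tuple
holds plain scalars) is to splat the elements into individual arguments:

```
import tvm

tpl = (1, 2, 3)
# splat the Python tuple so each element becomes its own argument
call = tvm.tir.call_packed("func_name", *tpl)
print(call)
```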
---
@eric-haibin-lin @ziheng any thoughts? I saw you were involved in
[PR6079](https://github.com/apache/tvm/pull/6079), which seems to contain the
last modifications to `te.extern`.
---
Hello everyone,
I was wondering if there is a way to pass a tuple (or array) directly to a
packed func.
```
tpl = (1, 2, 3)
tvm.tir.call_packed("func_name", tpl)
```
Reading the [packed_func
documentation](https://tvm.apache.org/docs/dev/runtime.html?highlight=packed_func#packedfunc),
I
I also +1 this feature.
It seems that what you want to do is very similar to a previous post of mine:
https://discuss.tvm.apache.org/t/relay-pattern-replacing-with-custom-relay-ops/7531/13
The biggest fear I have of introducing a new Relay operator (which I guess is
what would eventually happen
Hello,
In the [PassInfra Design and Development
Doc](https://tvm.apache.org/docs/dev/pass_infra.html#pass-objects) the
`function_pass` decorator is briefly explained.
In the Python codebase there is also the class `FunctionPass`; objects of this
type should be created with `function_pass`. I
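For reference, a minimal sketch of the decorator usage from the pass infra doc
(the pass body here is a do-nothing placeholder):

```
from tvm import relay

@relay.transform.function_pass(opt_level=1)
class MyFunctionPass:
    """Created through the function_pass decorator; runs on every
    relay.Function in the module."""

    def transform_function(self, func, mod, ctx):
        # identity transform: return the function unchanged
        return func
```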