The `nn.dense` & `nn.batch_matmul` ops share a similar history, I think. Recently
I added extra attrs to them so they can accept input tensors in either transposed
or non-transposed format.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/relay-frontend-why-transpose-before-gathernd/10657/3)
to respond.
In my mind, this part of Ansor is quite similar to AutoTVM; I'm not sure whether
it has been explained in those two papers.
Recently there has also been other work on the cost model of Ansor:
https://openreview.net/forum?id=aIfp8kLuvc9
cc @merrymercy
---
Pinging a few people, in case anyone has experience with this:
@tqchen @junrushao1994 @comaniac @FrozenGene :smiley:
---
[Visit
Topic](https://discuss.tvm.apache.org/t/multithread-threadpool-performance-degradation-when-running-relay-module-in-multiple-threads/10374/2)
to respond.
That would be really great!
I'd like to help fix these when I have time. I've just added an `nn.matmul` op
that supports input tensors in either transposed or non-transposed format.
---
Emm... this seems to be a tricky inconsistency.
The op definition of `dense` in Relay supports multi-dimensional input (e.g. in
the doc string and the shape functions), while the current computation
(e.g. `topi.nn.dense`) does not.
I guess the dense op was designed to support multi-dimensional input, but only
the simpler 2-D computation was ever implemented.
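For reference, the multi-dimensional behavior the doc string describes amounts to running the 2-D dense over a flattened batch. A NumPy sketch of those semantics (with a hypothetical helper name, not `topi.nn.dense` itself):

```python
import numpy as np

def dense_multidim(data, weight):
    """Dense with multi-dim input, reducing over the last axis.

    data:   (..., in_units)
    weight: (out_units, in_units)
    return: (..., out_units)
    """
    *batch, in_units = data.shape
    out = data.reshape(-1, in_units) @ weight.T  # the plain 2-D dense
    return out.reshape(*batch, weight.shape[0])

x = np.random.rand(2, 3, 8)
w = np.random.rand(16, 8)
assert dense_multidim(x, w).shape == (2, 3, 16)
```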
Dynamic shape support has been an important topic for TVM for a long time.
Currently the VM is the only way to run a dynamic model.
As for the AutoScheduler, we've had many discussions about it, but there is no
perfect approach to this problem yet.
P.S. @comaniac may have some experience on this.
[quote="yulongl, post:1, topic:10078"]
https://tvm.apache.org/docs/tutorials/get_started/auto_tuning_with_python.html#sphx-glr-tutorials-get-started-auto-tuning-with-python-py
[/quote]
What's the `target` you used when testing with TVM? On your i9 CPU, you can try:

```python
# Enabling the CPU's vector instructions usually matters a lot here.
# "core-avx2" is a safe choice for recent Intel CPUs; if your i9 supports
# AVX-512, "skylake-avx512" may perform even better.
target = "llvm -mcpu=core-avx2"
```
Actually Ansor does use "local" memory in some special cases.
The two-level cache read structure was tried at the beginning, when we built the
Ansor system, and it would still be easy to add such sketches to the current
main branch.
Ansor is a tuning-based schedule search system, which means structures like this
can be added as extra sketch rules and explored automatically during the search.
Sorry for that. Though we have a strong will to support TensorCore in Ansor, I
currently don't have the bandwidth to work on this topic.
As far as I know, some folks are working on the new TensorIR, based on which
TVM will get a new infrastructure combining the current AutoTVM and
AutoScheduler.
@junrushao1994 Yeah I see, but it seems we're not yet able to lower & build a
TIR module in the master branch? :laughing:
(Maybe I can have a try on the tensorir private branch...)
@FrozenGene I agree.
---
Thanks! We haven't tried such ops before; this case seems interesting.
I'll try your code and figure out where the possible bug occurs.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/auto-scheduling-for-lstm-operator/8158/2)
to respond.
The custom sketch rule support is ready in our develop branch, but the final
user interface for it has not been decided yet.
If this is important for you, we can consider upstreaming an experimental
version of the custom sketch support.
cc @comaniac @merrymercy
---
Ansor tuning with end-to-end model support is ready in our develop branch, and
the upstreaming is on its way. :smiley:
---
[Visit
Topic](https://discuss.tvm.apache.org/t/does-current-auto-scheduler-support-gpu/7660/5)
to respond.
You are receiving this because you enabled mailing list mode.
I guess you've used the RandomTuner or GridTuner, which traverse the whole
search space randomly or in sequence.
The ML part of AutoTVM refers to the XGBTuner in the current code base. With it,
AutoTVM extracts features from a given schedule and uses an XGBoost model to
predict its performance, so only the most promising candidates need to be
measured on real hardware.
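To make the contrast concrete, here is a toy pure-Python sketch of the idea (the `measure` and `predict` helpers are made up for illustration; real AutoTVM extracts loop features and trains an actual XGBoost model):

```python
import random

def measure(cfg):
    """Stand-in for compiling and timing one schedule config on hardware.
    Toy cost function: lower is better, minimized at cfg == 32."""
    return (cfg - 32) ** 2

space = list(range(1, 65))
random.seed(0)

# RandomTuner-style: spend all measurements on uniformly sampled configs.
random_best = min(measure(c) for c in random.sample(space, 8))

# XGBTuner-style idea: fit a cheap surrogate on configs measured so far,
# then spend the remaining real measurements only on configs the model
# predicts to be fast.
history = [(c, measure(c)) for c in random.sample(space, 4)]

def predict(cfg):
    # 1-nearest-neighbor surrogate, standing in for the XGBoost model.
    return min(history, key=lambda h: abs(h[0] - cfg))[1]

candidates = sorted(space, key=predict)[:4]  # model picks promising configs
guided_best = min(measure(c) for c in candidates)
```

The point is that the surrogate model is nearly free to query, so the tuner can rank the whole space and reserve expensive hardware measurements for the configs the model likes.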
cc @FrozenGene, who may have more experience with OpenCL.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/tvm-opencl-context-how-to-choose-device-type-as-accelerator/7821/6)
to respond.