`tvm.script` looks good to me.
---
I like the idea of `tvm.script`, as the intention is for the script to be able to represent both Relay and TIR modules collectively.
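For context, the TIR side of such a script can be written roughly like this (a sketch using the `tvm.script` namespace as it later landed; at the time of this thread the exact decorators were still being discussed):

```python
import tvm
from tvm.script import tir as T

# A TIR PrimFunc written in Python syntax and parsed back into TIR.
@T.prim_func
def vector_add(A: T.Buffer((8,), "float32"),
               B: T.Buffer((8,), "float32")):
    for i in range(8):
        with T.block("add"):
            vi = T.axis.spatial(8, i)
            B[vi] = A[vi] + 1.0
```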
---
Interestingly, compiling Faster R-CNN and Mask R-CNN from PyTorch, enabled by the PR https://github.com/apache/incubator-tvm/pull/6449, takes less than 3 minutes on my laptop. I wonder where the difference in compilation time between TF and PyTorch comes from.
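For reference, the conversion path being timed is the standard `relay.frontend.from_pytorch` flow. A minimal sketch with a plain traced classification model (the detection models go through the same entry point but need extra handling for their control flow):

```python
import torch
import torchvision
import tvm
from tvm import relay

# Trace a TorchScript module.
model = torchvision.models.resnet18().eval()
inp = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    traced = torch.jit.trace(model, inp)

# Convert TorchScript -> Relay, then build; the build step is where
# most of the compilation time is spent.
mod, params = relay.frontend.from_pytorch(traced, [("input0", (1, 3, 224, 224))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```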
---
:heart_eyes: :star_struck: Thank you for proposing this! Completely agreed that
we should disambiguate.
Having `tvm.script` and `te.hybrid.script` is an improvement, but I still think
there is far more overlap than ideal between the two.
What about `pytir`? Then it opens the door to other emb
CC: @spectrometerHBH @Hzfengsy @were
---
## Current issue
TVM currently has two different hybrid scripts: `te.hybrid.script` and
`tvm.hybrid.script`. This leads to confusion, as the two scripts are similar but
have different use cases and properties. It is especially confusing for new
users, since "hybrid script" can refer to either of these.
---
I have opened a gist for the monthly reports since the last v0.6 release:
https://gist.github.com/ZihengJiang/6d3440ec22852dc9baae2e3f278ad8b4
We can start organizing the monthly reports into the release note template.
@zhiics @tqchen
--
# New Features
## Accelerator
## Ansor
## Frontend and User Interface
## Relay
## Runtime
## TIR
## Quantization
# Feature Improvement
# Performance Improvements
# Documentation
# Build and Tests
# Bug Fixes
# Known Issues
# Deprecation
--
The wheels are built on a newer version of CentOS. The CPU pip wheel is
manylinux2010 compatible, and the CUDA wheels are manylinux2014 compatible.
Releasing wheels for multiple CUDA versions is meant to accommodate different
development and deployment environments.
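To check whether a given machine can install these, one can list the platform tags the local interpreter accepts. A small sketch using the `packaging` library (the same library pip vendors; the exact tags printed depend on your glibc):

```python
# List the manylinux platform tags this interpreter accepts.
from packaging.tags import sys_tags

platforms = {t.platform for t in sys_tags()}
print("manylinux2010 ok:", any(p.startswith("manylinux2010") for p in platforms))
print("manylinux2014 ok:", any(p.startswith("manylinux2014") for p in platforms))
```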
---
Thanks for the clarification! I concur that such a primitive would be useful
and would allow more flexible compute movements.
Regarding the full graph, I agree that Relay (along with its optimizations) is
very useful. I was wondering whether there would be a benefit to lowering the
full graph to TensorIR.
---
For concat, we could introduce a reverse-inlining primitive that inlines
elementwise operations (after the concat) back into the concat, which should be
helpful in many cases.
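To make the idea concrete, here is a sketch of the effect using the `reverse_compute_inline` primitive as it later landed in `tvm.tir.Schedule` (a minimal single-producer example rather than the multi-block concat case):

```python
import tvm
from tvm.script import tir as T

@tvm.script.ir_module
class Module:
    @T.prim_func
    def main(A: T.Buffer((128,), "float32"),
             C: T.Buffer((128,), "float32")):
        B = T.alloc_buffer((128,), "float32")
        for i in range(128):
            with T.block("B"):
                vi = T.axis.spatial(128, i)
                B[vi] = A[vi] * 2.0
        for i in range(128):
            with T.block("C"):
                vi = T.axis.spatial(128, i)
                C[vi] = B[vi] + 1.0

sch = tvm.tir.Schedule(Module)
# Pull the trailing elementwise block "C" back into its producer "B";
# afterwards the producer writes A[vi] * 2.0 + 1.0 directly into C and
# the intermediate buffer B disappears.
sch.reverse_compute_inline(sch.get_block("C"))
print(sch.mod.script())
```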
While it is possible to represent a full graph, we would still imagine Relay
being super useful as a coarse-grained repr for
---
Understood. Thanks @tqchen!
---
I'd like to see the tvmc work be functional before we cut the release, please.
I'd also like to see the Ethos-N PRs land as well.
--
Not publishing to PyPI was due to the PyPI file size limit (the CUDA binary
size is quite big). We can move to PyPI once the file size limit increase
request is approved. The scripts are available at
https://github.com/tlc-pack/tlcpack
---
Thanks for the proposal! Looks quite interesting!
Out of curiosity:
1) In the concat example you've shown, the original stage is represented as
three blocks that seem to assign to the same buffer. I'm curious what happens
if we want to move the concat (using compute_at, if possible
---
Technically, it should be supported. However, due to time constraints, we have
not implemented it yet.
---
Thanks for this explanation. I'm interested in whether it might be possible to
match tensor intrinsics of variable size. For example, Arm SVE introduces
vector instructions of variable size.
---
Thank you for your interest.
Tensorize in TensorIR is completely different from the TE one. In TensorIR, we
use two functions (desc_func and intrin_func) to define an intrinsic. Here
would be an example of an intrinsic (note that TensorIR is still WIP, so the
API may change).
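A sketch of the desc/impl pairing, written against the `tvm.tir.TensorIntrin` API that eventually landed (the function bodies, the `my_dot4` extern symbol, and the `dot4` name are illustrative assumptions, not the RFC's original code):

```python
import tvm
from tvm.script import tir as T
from tvm.tir import TensorIntrin

# desc: the computation pattern the scheduler should match.
@T.prim_func
def dot_desc(a: T.handle, b: T.handle, c: T.handle):
    A = T.match_buffer(a, (4,), "float32")
    B = T.match_buffer(b, (4,), "float32")
    C = T.match_buffer(c, (1,), "float32")
    with T.block("root"):
        T.reads(C[0:1], A[0:4], B[0:4])
        T.writes(C[0:1])
        for i in range(4):
            with T.block("update"):
                vi = T.axis.reduce(4, i)
                C[0] = C[0] + A[vi] * B[vi]

# impl: what the matched region is replaced with; here it calls a
# hand-written micro-kernel ("my_dot4" is a made-up symbol).
@T.prim_func
def dot_impl(a: T.handle, b: T.handle, c: T.handle):
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    B = T.match_buffer(b, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (1,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(C[0:1], A[0:4], B[0:4])
        T.writes(C[0:1])
        T.evaluate(T.call_extern("int32", "my_dot4",
                                 C.access_ptr("w"),
                                 A.access_ptr("r"),
                                 B.access_ptr("r")))

# Register the pair under a name the schedule can refer to.
TensorIntrin.register("dot4", dot_desc, dot_impl)
```

A schedule can then replace a matched loop nest with the implementation via `sch.tensorize(block_or_loop, "dot4")`.
---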
Thanks for this RFC, I think it's a great idea and will help solve a number of
issues I've been facing recently. I'm particularly interested in what
`tensorize` will look like for this new IR. Could you give a snippet as an
example?
I'm also interested in what the interaction of this will be
---
Thanks @haichen and @tqchen! This is really cool, and new users will certainly
benefit from it.
I'm curious to understand why we are (I assume) self-hosting rather than using
PyPI. Is this due to ASF licensing rules as well?
Also, are the scripts and parameters you're using to generate the