Here's a list of fixes we applied to the v0.6 branch. I will cut a tag this Friday.
* Fixed process termination routine in Windows #4844
* [Runtime] Fix NDArray SaveDLTensor declaration and implementation signature different #4586
* [NODE][Serialization] fix serialization precision loss in float #45
Thanks for pointing that out. I'll remove it accordingly.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/8) to
respond.
You are receiving this because you enabled mailing list mode.
Here is a list of bug fixes we're going to apply to the v0.6 branch; please let me know if I missed anything.
* [RELAY] bugfix. #2215
* [Graph Tuner] Fix benchmark layout in graph tuner #3926
* [VTA] Parameterization and bug fix in TensorLoad module #3841
* [VTA] Fix TSIM compile error in Linux (ad
This is a proposal to do a minor (bugfix) release of v0.6, aka v0.6.1. Commits will be cherry-picked to the v0.6.1 branch. We follow the standard [Apache release process](https://tvm.apache.org/docs/contribute/release_process.html).
I will go through the commit history to get a list of bug fixes.
This is a good suggestion. If you find any bug fixes missing from our monthly dev reports, feel free to point them out; this would help us capture the work in our release notes later.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-improve-pull-requests-with-respect-to-bug-fixes/6529/3)
to resp
I believe @kevinthesun and @haichen are working on that.
---
[Visit
Topic](https://discuss.tvm.ai/t/whether-tvm-will-support-dynamic-shapes-in-the-future/3700/4)
to respond.
Maybe I'm missing some context. Would you mind giving an example?
---
[Visit
Topic](https://discuss.tvm.ai/t/discussion-adding-a-function-to-relay-module-automatically-triggers-infertype/3643/2)
to respond.
I agree that compiler techniques can be used to optimize "add", and in the long term MXNet can adopt such optimizations.

But let's focus on how to support the current use case. It totally makes sense that, for the reasons above, we'd like to use option 1, while I'm wondering whether it has any pro
Polyhedral optimization (or at least the ability to easily apply polyhedral-like analysis) might be attractive for ASICs, though; it could help to build a smarter tensorizer.
---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/22)
to respond.
`LOG(INFO) << oshape;` ?
---
[Visit
Topic](https://discuss.tvm.ai/t/how-to-debug-and-print-out-content-of-indexexpr/2039/2)
to respond.