I pulled the newest code, and I can get the data now.
But it still has the same problem:
"Downcast from relay.RefType to relay.TensorType failed."
---
[Visit Topic](https://discuss.tvm.ai/t/relay-concatenate-missing-data-when-development-the-gradient-of-relay-concatenate/3595/2) to respond.
> I’m wondering whether it has any problem such as what @junrushao1994
> mentioned.
If there are unknown aliases/unknown add_to in other places in the code, it
cannot be modeled as option 1. Let's hope that doesn't happen.
---
If I understand correctly, `add_to(a, b)` increments a by b. During this
process, the value of a will be changed.
The second approach is, imho, a RED FLAG idea that I don't think we should pursue.
If add_to is implemented as above, it will greatly complicate Operator Fusion,
Gradient, Partial Eva
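To make the distinction concrete, here is a minimal sketch in plain NumPy (not TVM APIs; this `add_to` is a hypothetical stand-in for the proposed semantics):

```python
import numpy as np

def add(a, b):
    # Pure: returns a fresh array; neither input is modified.
    return a + b

def add_to(a, b):
    # Mutating: increments a in place, so every alias of a sees the change.
    a += b
    return a

x = np.array([1.0, 2.0])
alias = x                           # another name for the same buffer
y = add(x, np.array([1.0, 1.0]))    # x is untouched
add_to(x, np.array([1.0, 1.0]))     # x (and alias) are mutated
# alias now equals [2.0, 3.0]; this hidden aliasing is what makes
# passes such as fusion and gradient harder to reason about.
```

A pass that assumes values are immutable can freely reorder or duplicate `add`, but once `add_to` is allowed it must track every alias of `a`.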
I agree that compiler techniques can be used to optimize "add", and in the long
term MXNet can adopt such optimizations.
But let's focus on how to support the current use case. It totally makes sense
that, because of the previous reason, we'd like to use option 1, while I'm
wondering whether it has any problem such as what @junrushao1994 mentioned.
> @jianyuh please act on the review comments @were please
> https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly
> If we have time, we could investigate why we couldn't achieve 252 GFLOPS or
> even more. Only 73% hardware efficiency means we have much room left to dive into.
252 GFLOPS is a reasonable number, as this is ~90% hardware efficiency.
Currently FBGEMM and MKL-DNN can reach this number. For the current PR, t
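As a sanity check on the arithmetic above (the hardware peak is not stated in the thread; 280 GFLOPS is an assumed value chosen so that 252 GFLOPS comes out to ~90%):

```python
def efficiency(achieved_gflops, peak_gflops):
    # Fraction of the hardware's peak throughput actually achieved.
    return achieved_gflops / peak_gflops

ASSUMED_PEAK = 280.0  # hypothetical peak in GFLOPS, not from the thread
print(efficiency(252.0, ASSUMED_PEAK))  # 0.9
```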
Is there any update in this thread? Please provide a pointer or roadmap if TVM
already supports (or plans to support) the tensor core feature. Appreciate it.
---
[Visit Topic](https://discuss.tvm.ai/t/implement-conv2d-using-tensor-core/1262/9) to respond.
You are receiving this because you enabled mailing list mode.
Closed #3708.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/pull/3708#event-2534251479
@tqchen thanks for clarifying. I thought it was missing, since the committer and
PMC distinction is common in other places. I will close the PR now.
Do we need this kind of operator for the optimizer, or do we have better alternatives?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-implement-add-to-semantic-in-tvm/3618/3) to respond.
You are receiving this because you enabled mailing list mode.
The original community doc puts PMC in the committer section, as PMC members are
part of the committer group. I would recommend keeping the organization as it is.
While admittedly add_to is used in frameworks of the previous generation,
let's discuss this: do we really need it? Relay's gradient pass just
produces gradients without mutation and it looks perfectly fine, so in which
cases do we have to rely on mutation?
Another thing that we could t
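As a hedged sketch of what mutation-free accumulation looks like (plain Python, not Relay's actual implementation): each use of a variable contributes a gradient, and the contributions are combined by pure addition rather than an in-place add_to:

```python
def accumulate_grads(contributions):
    # contributions: (variable_name, gradient_value) pairs, one pair per
    # use of each variable. Combined functionally: each step builds a new
    # value instead of mutating an existing buffer.
    acc = {}
    for name, g in contributions:
        acc[name] = acc.get(name, 0.0) + g
    return acc

print(accumulate_grads([("x", 1.0), ("y", 2.0), ("x", 3.0)]))
# {'x': 4.0, 'y': 2.0}
```

Because no buffer is ever written in place, a compiler pass can reorder or deduplicate these additions without aliasing concerns.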