+1
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/8928#issuecomment-917238307
Thanks @xiebaiyuan.
On pulling the wasm binary: we might be able to reuse
the new RPC session constructor (via WebSocket) that reconstructs the session
directly from the wasm binary, so a file exchange may not be needed:
https://github.com/apache/tvm/blob/main/web/tests/python/webgpu_rpc_test.py#L59
Would
I wanted to propose as a highlight the TE-level auto-differentiation work, led
by @yzhliu, which unlocks TE-level training capability in the TVM stack.
Agree with @leandron that we could first refer to the items there. Many
"initial" features in v0.7 are now stable. For example:
* Initial automatic scheduling support -> stable.
* Initial command line driver interface -> stable.
* Initial Hexagon support -> stable.
* Bring your own codegen (BYOC
Thanks for the work. I believe v0.8 is a good chance to land TensorIR
scheduling (https://github.com/apache/tvm/issues/7527). Also, I will try my
best to contribute some initial TensorIR tutorials and documentation before
the v0.8 release.
> (IRModule, Function) -> (IRModule, GlobalVar)
I'm still in favor of this signature since it's a cheap and cheerful way to
ensure we don't end up with N ways to implement the lower-and-rewrite-calls
machinery embedded in te_compiler. I think my earlier question still stands:
> Have you tried out
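To pin down what the quoted `(IRModule, Function) -> (IRModule, GlobalVar)` signature implies, here is a plain-Python sketch. The `IRModule`, `Function`, and `GlobalVar` classes below are simplified stand-ins, not TVM's real data structures, and the `lower` function is purely illustrative:

```python
# Plain-Python model of the discussed lowering signature; all class
# and function names here are simplified stand-ins, not TVM's API.
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass(frozen=True)
class GlobalVar:
    """Stable handle callers use to reference a function in a module."""
    name: str


@dataclass
class Function:
    body: str


@dataclass
class IRModule:
    functions: Dict[GlobalVar, Function] = field(default_factory=dict)


def lower(mod: IRModule, func: Function) -> Tuple[IRModule, GlobalVar]:
    """Lower `func`, add the result to the module, and return the
    updated module plus the GlobalVar that call sites should be
    rewritten to reference."""
    gvar = GlobalVar(f"lowered_{len(mod.functions)}")
    new_mod = IRModule(dict(mod.functions))
    new_mod.functions[gvar] = Function(body=f"lowered({func.body})")
    return new_mod, gvar


mod, gv = lower(IRModule(), Function(body="x + 1"))
print(gv.name)                 # lowered_0
print(mod.functions[gv].body)  # lowered(x + 1)
```

The appeal of this shape is that the caller gets back a single handle (the `GlobalVar`) to rewrite calls against, so the lower-and-rewrite-calls logic lives in one place rather than being re-implemented per backend.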
Thanks for the great work. A minor note on rendering: it would be great to add
the language tag to the code blocks so we get syntax highlighting.
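For example, when a fenced code block is tagged with its language (as the block below is tagged `python`), renderers such as GitHub's apply syntax highlighting; an untagged fence renders as plain monospace text. The fragment itself is just a trivial illustration:

```python
# Illustrative fragment: because the enclosing fence is tagged
# "python", it renders with Python syntax highlighting.
def add_one(x: int) -> int:
    return x + 1


print(add_one(41))  # 42
```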
Thanks @MeeraN7. Yes, I get what you mean. Right now we are adding an
"is_scalable" field to indicate that the broadcast and ramp are "context
dependent" on VL. Additionally, we might need to update DataType to indicate a
scalable data type.
This context dependency is the missing information I m
Hi @tqchen, thank you for the comment. To clarify a few things: VL is not added
to the Ramp Node at all; it is simply a string that is used when printing TIR
for visual representation. The only addition to the Ramp Node (and also the
Broadcast Node) is a boolean called "is_scalable", which should not
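To make the discussion concrete, here is a plain-Python model of a Ramp node carrying the proposed boolean. This is not TVM's actual C++ node definition; the class, its fields, and the printed form are illustrative, mirroring the two points above: only an "is_scalable" flag is stored, while "VL" appears solely in the printed TIR:

```python
# Illustrative model only -- not TVM's real TIR node definition.
from dataclasses import dataclass


@dataclass
class Ramp:
    """Simplified Ramp node with the proposed "is_scalable" flag.

    When is_scalable is True, `lanes` is interpreted as a multiple of
    the hardware vector length VL rather than a fixed lane count.
    """
    base: int
    stride: int
    lanes: int
    is_scalable: bool = False

    def __str__(self) -> str:
        # "VL" shows up only here, in the printed representation,
        # mirroring the point that VL is a printing convention and
        # not a stored field on the node.
        lanes = f"{self.lanes}xVL" if self.is_scalable else str(self.lanes)
        return f"ramp({self.base}, {self.stride}, {lanes})"


print(Ramp(0, 1, 4))                    # ramp(0, 1, 4)
print(Ramp(0, 1, 4, is_scalable=True))  # ramp(0, 1, 4xVL)
```

This keeps fixed-width vectors completely unchanged (the flag defaults to False), while scalable vectors are distinguished only by how the stored lane count is interpreted.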
@tqchen @junrushao1994
Add WebAssembly AutoTVM support for TVM.
The related pull request will be pushed soon and should be finished within a week.
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm-rfcs/pull/32
-- Commit Summary --
* RPC OF rfcs/0021-add_web_assembly_autot