The fundamental problem is that (pre-compiled) PyTorch Python modules use the
pre-C++11 string ABI, presumably to blend better into the Python ecosystem.
TVM does not, so it would need to link against a LibTorch built with the "new"
C++11 string ABI, and the two ABI versions clash.
One option is to use self-compiled PyTorch.
Thank you @masahi !
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/pull/7401#issuecomment-1063277207
You are receiving this because you are subscribed to this thread.
Are you on the TVM Discord or somewhere similar where we could discuss this quickly?
--
https://github.com/apache/tvm/pull/7401#issuecomment-1061504189
Hi, so I finally rebased this, and it all compiles and runs a test against
current PyTorch master, so I think I'm back in business with this PR (unless it
has been obsoleted, but from what I understand, the bridge is in the other
direction).
--
M. Ruberry of the PyTorch team re-landed the update of dlpack.h in PyTorch.
If this still holds next week, it'll be exciting to bring this up to date. :)
--
https://github.com/apache/tvm/pull/7401#issuecomment-1018688381
So I thought I could wait it out, but I'll look into working around the
version discrepancy in the next few weeks.
--
https://github.com/apache/tvm/pull/7401#issuecomment-1008961724
@masahi So I had hoped to get the dlpack header version in PyTorch bumped (see
the linked bug), but Facebook has internal uses that make it insist on the old
one.
I wonder if we could work around it by providing a "dlpack-compat" header that
defines the names for the types / fields? Or I could try
Just a quick note that when I tried to revive this back in the summer it got a
bit stalled around pytorch/pytorch#65047 .
--
So I have been mulling over the best granularity / approach. Currently I'm
taking TorchScript functions / graphs as the unit I'm working with. An
alternative could be to move to the PyTorch operator level (so one
aten::... call), which would seem more natural in Relay, but then one
woul
I wonder whether this would make the torch fallback op
(https://github.com/apache/tvm/pull/7401) more or less useful (it would depend
on what you (plan to) do with unsupported ops). I am still pondering whether to
close it or dust it off.
I should note that, as far as I know, NVidia has a TensorRT
> I would really appreciate getting at least your fix to solve this issue
> merged into upstream. Maybe in a separate PR, as this is not really related to
> the TorchScript use case.
I'm all for it, but I wouldn't know how to add tests without something using
it. If you or @masahi have any opinions
Yeah, the general idea is to use this as the fallback. I can add the fallback
generation here in this PR if that is better.
Also, I added a bit of a pro/con discussion regarding single op vs. program on
the forum thread; if you have opinions, I'd be very grateful if you could chime
in.
--
> I'm curious how it integrates with PyTorch frontend. Do we convert every op
> not supported to relay.torchop, run BYOC flow to get TorchScript subgraphs,
> and send them to libtorch? Sounds interesting!
This is how I'd like it to work out. I've been thinking about what the best
"level" is, and while
This patch adds support for calling TorchScript. This can be used as a
fallback for when Torch operators are not yet implemented, or if one wants to
incorporate bespoke PyTorch custom ops into TVM with ease.
It adds
- a new Relay op `torchop` that takes a variable number of inputs and executes
a provided TorchScript function
Seems good to me. If we are giving up on pre-3.3 compat, I should also remove
the code object v3 workaround I introduced in the spring in favour of 3.5+.
(I'll send a PR.)
--