Oh right, I forgot to fix that. This happens on Python 3.6 or older, so can you
try 3.7 or 3.8?
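For anyone stuck on 3.6: the failure comes from `subprocess.run(capture_output=True)`, since the `capture_output` flag only exists from Python 3.7 onward. A minimal sketch of a 3.6-compatible equivalent (the helper name is mine, not TVM's) passes the pipes explicitly:

```python
import subprocess
import sys

def run_captured(cmd):
    """Run cmd and capture stdout/stderr, working on Python 3.6 as well."""
    if sys.version_info >= (3, 7):
        # capture_output was introduced in Python 3.7
        return subprocess.run(cmd, capture_output=True)
    # Python 3.6 equivalent: pass the pipes explicitly
    return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

result = run_captured([sys.executable, "-c", "print('hello')"])
print(result.stdout.decode().strip())  # -> hello
```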
---
[Visit
Topic](https://discuss.tvm.apache.org/t/run-test-cutlass-py-error-unexpected-keyword-argument-capture-output/11400/2)
to respond.
You are receiving this because you enabled mailing list mode.
Like I mentioned op name is defined in the TOPI compute, so auto-scheduler
cannot change it. To solve this issue, we need to introduce a general utility
to rename op names in a TE compute.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/print-auto-schedule-python-schedule-with-topi-op/)
to respond.
When I run tests/python/contrib/test_cutlass.py, I get these errors:
FAILED ../tests/python/contrib/test_cutlass.py::test_dense - TypeError:
__init__() got an unexpected keyword argument 'capture_output'
FAILED ../tests/python/contrib/test_cutlass.py::test_dense_bias - TypeError:
__init__() got an unexpected keyword argument 'capture_output'
Perhaps adding renaming logic
[here](https://github.com/apache/tvm/blob/main/src/auto_scheduler/compute_dag.cc#L1206)
may not work, because that only covers the print of `... = tuple(name.op.axis) +
tuple(name.op.reduce_axis)`; the following steps in [step
print](https://github.com/apache/
Yes that is correct. Though I believe someone was planning to work on this one
in the next week.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/toppattern-has-not-been-registered-for-nn-dropout/11305/9)
to respond.
Hi Andrew,
One more question: does similar behavior also apply to the bn layer? That is,
is it fused into the conv layer and therefore has no actual implementation?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/toppattern-has-not-been-registered-for-nn-dropout/11305/8)
to respond.
I noticed one very interesting thing that might be helpful for this bug.
I tried to compile the models exported from mxnet:
```
from tvm import relay
import tvm
import numpy as np
import torch
import torch as th
import torch.nn as nn
from torchvision import models
import torch.onnx
```
[quote="lhutton1, post:3, topic:11394"]
elegant
[/quote]
Oh! Thanks for the information. You saved my day.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/relay-failed-to-build-models-exported-from-pytorch/11394/5)
to respond.
Hi @Lyken17,
I also ran into this issue recently. It turned out to be conflicting symbols
between PyTorch and TVM, see
https://github.com/apache/tvm/issues/9362#issuecomment-955263494 for the
resolution. Alternatively, a quicker (but less elegant) solution is to import
`torch` before `tvm`.
My environment:
ubuntu 20.04 | gcc: 9.3 | llvm: 10.0 | nvcc: 11.1
---
[Visit
Topic](https://discuss.tvm.apache.org/t/relay-failed-to-build-models-exported-from-pytorch/11394/2)
to respond.
Hi there,
While I was following the tutorial [relay quick
start](https://tvm.apache.org/docs/tutorial/relay_quick_start.html), I tried to
load a module from pytorch, but it raises a segmentation fault. The TVM I am
using is at the latest commit `bff98843bef9a312587aaff51b679d9b69a7d5a7`, and the
Is there a way to mark a layer as an LSTM layer after conversion to Relay? I
see almost all RNN ops broken into individual cells, one per operator.
I have a custom accelerator with first-class support for LSTM at the operator
level. Is there an easy way to do this?
Thanks in advance.