[Apache TVM Discuss] [Questions] How to set cmake.config to let TVM support spirv?

2022-06-16 Thread masahi via Apache TVM Discuss
Usually, `USE_VULKAN=ON` should be enough to enable SPIR-V codegen and the Vulkan runtime. --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-set-cmake-config-to-let-tvm-support-spirv/12971/2)
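For reference, a minimal sketch of what compiling for Vulkan looks like once TVM is built with that flag (assuming `mod` and `params` come from one of the frontend importers):

```python
import tvm
from tvm import relay

# The Vulkan device only "exists" if TVM was built with USE_VULKAN=ON in config.cmake.
assert tvm.vulkan(0).exist, "TVM was not built with Vulkan support"

target = tvm.target.Target("vulkan", host="llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)  # SPIR-V codegen happens here
```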

[Apache TVM Discuss] [Application] Will TVM support JAX?

2022-06-15 Thread masahi via Apache TVM Discuss
I think that would be an interesting project, and something that is entirely feasible. But personally, since developing a frontend requires significant effort, we already have good support for PyTorch, and PyTorch is increasingly adding JAX-inspired features, I'd rather improve our support for PyTorch.

[Apache TVM Discuss] [Questions] How to deal with prim::DictConstruct

2022-06-14 Thread masahi via Apache TVM Discuss
You can try something like https://github.com/masahi/tvm-cutlass-eval/blob/master/bert/export.py#L26-L33 --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-deal-with-prim-dictconstruct/11978/5)
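A minimal sketch of the usual workaround (wrap the model so it returns a tuple instead of a dict, then trace it; the wrapper, output keys, and `model` / `example_inputs` below are illustrative, not taken from the linked script):

```python
import torch

class TupleOutputWrapper(torch.nn.Module):
    """Return the dict outputs of the wrapped model as a plain tuple,
    so the TorchScript graph no longer contains prim::DictConstruct."""

    def __init__(self, model, keys):
        super().__init__()
        self.model = model
        self.keys = keys

    def forward(self, *args):
        out = self.model(*args)
        return tuple(out[k] for k in self.keys)

# `model` and `example_inputs` are the user's module and example inputs.
wrapped = TupleOutputWrapper(model, keys=["start_logits", "end_logits"])  # keys are illustrative
traced = torch.jit.trace(wrapped, example_inputs, strict=False)
```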

[Apache TVM Discuss] [Questions] [BYOC] How backwards compatible does the TensorRT partition_for_tensorrt function need to be?

2022-06-14 Thread masahi via Apache TVM Discuss
cc @comaniac @Laurawly --- [Visit Topic](https://discuss.tvm.apache.org/t/byoc-how-backwards-compatible-does-the-tensorrt-partition-for-tensorrt-function-need-to-be/12957/2)

[Apache TVM Discuss] [Questions] Intution on why this int8 algorithm is slower?

2022-06-07 Thread masahi via Apache TVM Discuss
Maybe the slowdown is due to int16 fallback? Or, since you modified the compute, the "right" schedule may not be getting called. --- [Visit Topic](https://discuss.tvm.apache.org/t/intution-on-why-this-int8-algorithm-is-slower/12920/2)

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-24 Thread masahi via Apache TVM Discuss
Looks like `SimplifyExpr` doesn't support folding `bias_add` followed by `add`, see https://github.com/apache/tvm/blob/6942b3660df3551a3a9a86c2faba834d366a2a7e/src/relay/transforms/simplify_expr.cc#L651-L652. So neither case works unless you modify that pass. But I recommend not depending on `b…

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread masahi via Apache TVM Discuss
Note that you are using `SimplifyInference` twice, but you want to replace the second one with `SimplifyExpr`. But right, it seems `bias_add` and `add` are not folded. It seems `relay.transform.CanonicalizeOps()` converts `bias_add` to `add`, so you want to call it before `SimplifyExpr`. I tri…
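A minimal sketch of the pass ordering being suggested (assuming `mod` is the imported Relay module; the pass names are the real Relay passes, the ordering is the point):

```python
import tvm
from tvm import relay

seq = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),  # decompose batch_norm into elementwise mul/add
        relay.transform.CanonicalizeOps(),    # rewrite bias_add into a plain add
        relay.transform.FoldConstant(),
        relay.transform.SimplifyExpr(),       # fold consecutive adds with constant rhs
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```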

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread masahi via Apache TVM Discuss
Having `add` there is expected since batch norm has a shift by a constant. But the idea is that the new `add` can be folded into the conv2d bias add. The `SimplifyExpr` pass finds two such consecutive `add` ops with a constant rhs and folds them into one `add`.

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread masahi via Apache TVM Discuss
Have you run `bind_params_by_name`? https://github.com/apache/tvm/blob/ac6607282e080dc15cce7d9cf565f5d390ba0f16/tests/python/relay/test_pass_fold_constant.py#L341 --- [Visit Topic](https://discuss.tvm.apache.org/t/batchnorm-op-fusion-in-tvm/12391/4)

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread masahi via Apache TVM Discuss
You need to apply `FoldScaleAxis` after `FoldConstant`. See https://github.com/apache/tvm/blob/ac6607282e080dc15cce7d9cf565f5d390ba0f16/tests/python/relay/test_pass_fold_constant.py#L316-L323 --- [Visit Topic](https://discuss.tvm.apache.org/t/batchnorm-op-fusion-in-tvm/12391/2)
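A rough sketch of that recipe end to end (the parameters have to be bound into the module as constants first, otherwise `FoldConstant` has nothing to fold; `mod` and `params` are assumed to come from a frontend importer):

```python
import tvm
from tvm import relay

# Bind weights/biases into the module as constants so they can be folded.
mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params)

seq = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),  # turn batch_norm into elementwise mul/add
        relay.transform.FoldConstant(),
        relay.transform.FoldScaleAxis(),      # fold the scale into the conv2d weights
        relay.transform.FoldConstant(),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```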

[Apache TVM Discuss] [Questions] Can One reduce stage fuse into another reduce stage?

2022-03-21 Thread masahi via Apache TVM Discuss
[quote="jnwang, post:1, topic:12367"] I would like to know if it is possible to combine two reduce stages (te.sum) into one reduce stage in te [/quote] I'm not sure what you mean here, but if I take it literally, that wouldn't be feasible. But there is an example of scheduling fused conv2d -

[Apache TVM Discuss] [Questions] Can tvm be applied to the recommendation system?

2022-02-20 Thread masahi via Apache TVM Discuss
Depends on your model. We lack support for the `embedding_bag` op, which is very important in DLRM etc., so performance may not be great. --- [Visit Topic](https://discuss.tvm.apache.org/t/can-tvm-be-applied-to-the-recommendation-system/12129/2)

[Apache TVM Discuss] [Questions] Question about Hexagon support status

2022-02-19 Thread masahi via Apache TVM Discuss
This is under very active development. Simple graphs like https://github.com/apache/tvm/blob/2b35cfd6ddb73afecd3f550f33881e1fdc7c3267/tests/python/contrib/test_hexagon/rpc/test_launcher.py#L190-L202 now run end to end on Hexagon (both host and device are Hexagon).

[Apache TVM Discuss] [Questions] How to read out the intermediate value in Relay IR?

2022-02-15 Thread masahi via Apache TVM Discuss
I heard that's possible with `debug_executor`. But I've never tried it. Can you take a look? --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-read-out-the-intermediate-value-in-relay-ir/12084/2)
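Untested, but a sketch of what using the debug executor to dump intermediate values might look like (assuming `mod`/`params` from a frontend importer, a CPU target, and an input named "data"):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib.debugger import debug_executor

dev = tvm.cpu(0)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# The debug executor runs the graph node by node and dumps every intermediate
# tensor under dump_root, so each Relay op's output can be inspected afterwards.
m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev, dump_root="/tmp/tvmdbg")
m.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
m.run()
```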

[Apache TVM Discuss] [Questions] [Pytorch] The inference results of tvm and pytorch are inconsistent

2022-02-09 Thread masahi via Apache TVM Discuss
Interesting, does PyTorch do something like that? It's not obvious to me if we can do this without concern. Would this change make the output of every adaptive avg pool different? What about normal avg pooling?

[Apache TVM Discuss] [Questions] [op testing] single op ir testing batch matmul

2022-02-09 Thread masahi via Apache TVM Discuss
Good catch, can you send a PR to fix it? --- [Visit Topic](https://discuss.tvm.apache.org/t/op-testing-single-op-ir-testing-batch-matmul/12049/3)

[Apache TVM Discuss] [Questions] Run test_cutlass.py error:unexpected keyword argument 'capture_output'

2022-01-05 Thread masahi via Apache TVM Discuss
Now conv2d is fully supported, including residual block fusion. --- [Visit Topic](https://discuss.tvm.apache.org/t/run-test-cutlass-py-error-unexpected-keyword-argument-capture-output/11400/6)

[Apache TVM Discuss] [Questions] Quantized Transformer

2022-01-05 Thread masahi via Apache TVM Discuss
First of all, Ansor is no good for int8, since it cannot use fast int8 hardware (VNNI, Tensor Cores) at all.

* How are you quantizing the model?
* What backends are you interested in, CPU or GPU?

--- [Visit Topic](https://discuss.tvm.apache.org/t/quantized-transformer/11850/2)

[Apache TVM Discuss] [Questions] Generate native C code from TVM IR

2021-12-27 Thread masahi via Apache TVM Discuss
I think "native C-code generation" is used for more niche use cases like micro-TVM / embedded, but yes, we don't typically emit native C code for convolution kernels etc. --- [Visit Topic](https://discuss.tvm.apache.org/t/generate-native-c-code-from-tvm-ir/11792/2) to respond. You are r

[Apache TVM Discuss] [Questions] [BYOC][ONNX] Question about indices tensor of gather-scatter ops

2021-12-27 Thread masahi via Apache TVM Discuss
[quote="Nullko, post:1, topic:11778"] My accelerator runtime’s Gather-Scatter ops require `i32` indices tensors, however, by default Relay uses `i64` indices, is there a simple way to set all indices tensors in a Relay graph to `i32` dtype? [/quote] I don't see an easy way for this, may be you

[Apache TVM Discuss] [Questions] RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect

2021-12-27 Thread masahi via Apache TVM Discuss
This is coming from PyTorch, not TVM. --- [Visit Topic](https://discuss.tvm.apache.org/t/runtimewarning-iterating-over-a-tensor-might-cause-the-trace-to-be-incorrect/11779/2)

[Apache TVM Discuss] [Questions] Question about what TVM does

2021-12-27 Thread masahi via Apache TVM Discuss
For now, we only support inference. But the community is definitely interested in training support and some people are already working on it. There are some related talks from TVMCon (recordings will be uploaded early next year).

[Apache TVM Discuss] [Questions] How to get layout of relay::callnode and relay function?

2021-12-17 Thread masahi via Apache TVM Discuss
Only certain ops are aware of layout information, so there is no layout information attached to `relay::Function` inputs or to most `relay::Call` nodes. For layout-aware ops, you can query the layout information via the op attributes. See the example in https://github.com/apache/tvm/blob/main/python/tvm/…
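As a small illustration of reading layout attributes from a layout-aware op (a tiny standalone conv2d call, not tied to the linked example):

```python
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
call = relay.nn.conv2d(
    data, weight, kernel_size=(3, 3), data_layout="NCHW", kernel_layout="OIHW"
)

# Layout-aware ops expose their layouts through the call attributes.
print(call.attrs.data_layout)    # NCHW
print(call.attrs.kernel_layout)  # OIHW
```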

[Apache TVM Discuss] [Questions] Confused about kMaxNumGPUs in runtime

2021-11-25 Thread masahi via Apache TVM Discuss
We use threads for parallelism within an operator. By "concurrency" I meant something like asynchronous execution among operators (also called inter-operator parallelism). --- [Visit Topic](https://discuss.tvm.apache.org/t/confused-about-kmaxnumgpus-in-runtime/11536/3)

[Apache TVM Discuss] [Questions] Issue: Converting model from pytorch to relay model

2021-11-23 Thread masahi via Apache TVM Discuss
[quote="AndrewZhaoLuo, post:2, topic:11538"] The onnx frontend is much more mature. [/quote] Be careful with making such claims :slightly_smiling_face: Actually PT frontend is fairly good and I can generally recommend it for PT users. @popojames You are probably using `torch.jit.script`, since

[Apache TVM Discuss] [Questions] Question on fuzzy path matching ---- matching arbitrary number and type of nodes in path

2021-11-18 Thread masahi via Apache TVM Discuss
cc @mbrookhart He may have some insights. --- [Visit Topic](https://discuss.tvm.apache.org/t/question-on-fuzzy-path-matching-matching-arbitrary-number-and-type-of-nodes-in-path/11493/2)

[Apache TVM Discuss] [Questions] Run test_cutlass.py error:unexpected keyword argument 'capture_output'

2021-11-04 Thread masahi via Apache TVM Discuss
Not yet, but I'll work on conv2d support next. Stay tuned. --- [Visit Topic](https://discuss.tvm.apache.org/t/run-test-cutlass-py-error-unexpected-keyword-argument-capture-output/11400/4)

[Apache TVM Discuss] [Questions] Run test_cutlass.py error:unexpected keyword argument 'capture_output'

2021-11-03 Thread masahi via Apache TVM Discuss
Oh right, I forgot to fix that. This happens on Python 3.6 or older, so can you try 3.7 or 3.8? --- [Visit Topic](https://discuss.tvm.apache.org/t/run-test-cutlass-py-error-unexpected-keyword-argument-capture-output/11400/2)
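For context, `subprocess.run(..., capture_output=True)` only exists on Python 3.7+, which is what the error message points at; a small illustration of the two spellings:

```python
import subprocess

# Python 3.7+: capture_output is accepted.
result = subprocess.run(["echo", "hello"], capture_output=True)

# Python 3.6: capture_output does not exist; it is shorthand for these two kwargs.
result = subprocess.run(["echo", "hello"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
```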

[Apache TVM Discuss] [Questions] Free(): invalid pointer Aborted

2021-10-30 Thread masahi via Apache TVM Discuss
You need to replace `/path/to/llvm-config` with the actual path to your llvm-config, for example `/usr/bin/llvm-config`. --- [Visit Topic](https://discuss.tvm.apache.org/t/free-invalid-pointer-aborted/11357/7)

[Apache TVM Discuss] [Questions] Free(): invalid pointer Aborted

2021-10-30 Thread masahi via Apache TVM Discuss
Please try the new solution posted in https://github.com/apache/tvm/issues/9362#issuecomment-955263494 --- [Visit Topic](https://discuss.tvm.apache.org/t/free-invalid-pointer-aborted/11357/5)

[Apache TVM Discuss] [Questions] Free(): invalid pointer Aborted

2021-10-30 Thread masahi via Apache TVM Discuss
As the message says, you need to tune for your workload to obtain reasonable performance. The `invalid pointer` problem is due to a symbol conflict between PyTorch and TVM; there is an open issue for this problem: https://github.com/apache/tvm/issues/9362 For now, swapping the import order of tvm and…

[Apache TVM Discuss] [Questions] TOpPattern has not been registered for nn.dropout

2021-10-25 Thread masahi via Apache TVM Discuss
@altanh may have something to say about dropout. --- [Visit Topic](https://discuss.tvm.apache.org/t/toppattern-has-not-been-registered-for-nn-dropout/11305/5)

[Apache TVM Discuss] [Questions] Add dynamic shape for pytorch expand converter op

2021-08-06 Thread masahi via Apache TVM Discuss
I think you can use `op.concatenate(...)` to turn a list of Expr into a single Expr representing a dynamic shape. The second solution also sounds reasonable. --- [Visit Topic](https://discuss.tvm.apache.org/t/add-dynamic-shape-for-pytorch-expand-converter-op/10723/2)
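A rough, purely illustrative sketch of that idea (reshape each per-dimension scalar Expr to a length-1 tensor, concatenate them into one 1-D shape tensor, and feed it to a shape-taking op):

```python
from tvm import relay
from tvm.relay import op as _op

data = relay.var("data", shape=(1, 1, 224), dtype="float32")
batch = relay.var("batch", shape=(), dtype="int64")  # a dim only known at runtime

dims = [
    _op.reshape(batch, [-1]),    # scalar Expr -> shape (1,)
    relay.const([3], "int64"),
    relay.const([224], "int64"),
]
target_shape = _op.concatenate(dims, axis=0)  # one 1-D tensor describing the target shape

# broadcast_to accepts an Expr shape and lowers to its dyn.* variant.
out = _op.broadcast_to(data, target_shape)
```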

[Apache TVM Discuss] [Questions] [Relay][Frontend] Why Transpose before GatherND

2021-07-29 Thread masahi via Apache TVM Discuss
I remember this transpose is necessary since our op follows the MXNet convention. `gather` and `gather_nd` in MXNet have a different expectation on the `indices` argument compared to other frameworks.

[Apache TVM Discuss] [Questions] [PyTorch] dyn.strided_slice loses shape information

2021-07-29 Thread masahi via Apache TVM Discuss
Yes, your observation is correct. We cannot support PT retinanet for two reasons:

* Our dynamic strided slice doesn't work well when the input shape is partially static/dynamic. It makes the output shape dynamic in all dimensions, even if slicing happens only along a certain dimension (batch axis etc). Unfort…

[Apache TVM Discuss] [Questions] Dynamic batch (input) support

2021-06-26 Thread masahi via Apache TVM Discuss
Performance is expected to be extremely bad. We cannot tune any workload involving dynamic shapes, while PyTorch uses cuDNN etc., which have no issue with dynamic shapes. --- [Visit Topic](https://discuss.tvm.apache.org/t/dynamic-batch-input-support/10069/10)

[Apache TVM Discuss] [Questions] Dynamic batch (input) support

2021-05-25 Thread masahi via Apache TVM Discuss
No, if you want to use dynamic shapes, the VM is always required. This is because the graph executor assumes that everything is static and preallocates all required memory based on static shape information. --- [Visit Topic](https://discuss.tvm.apache.org/t/dynamic-batch-input-support/10069/6)
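A minimal sketch of compiling and running through the VM instead of the graph executor (assuming `mod` and `params` come from a frontend importer and the first input has a dynamic batch dimension):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine

target = "llvm"
dev = tvm.cpu(0)

with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target=target, params=params)

vm = VirtualMachine(vm_exec, dev)
x = np.random.randn(7, 3, 224, 224).astype("float32")  # batch size chosen at run time
out = vm.invoke("main", tvm.nd.array(x, dev))
```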

[Apache TVM Discuss] [Questions] Do TVM support quantilization itself?

2021-05-24 Thread masahi via Apache TVM Discuss
It does, but the functionality is quite limited and it is not actively developed. There is an ongoing proposal to rework our quantization support, see https://discuss.tvm.apache.org/t/rfc-quantization-a-new-quantization-framework-in-tvm-initial-rfc-1-4/9775

[Apache TVM Discuss] [Questions] Autoscheduler on faster rcnn, stuck on measurement

2021-05-17 Thread masahi via Apache TVM Discuss
See my script for how I tune Faster R-CNN: https://github.com/masahi/torchscript-to-tvm/blob/master/maskrcnn/maskrcnn_test.py#L101 You probably want to use a bigger timeout (`timeout=100` in my code).
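The relevant knob, sketched roughly (the exact setup in the linked script may differ; the trial count and log file name are illustrative):

```python
from tvm import auto_scheduler

# A larger timeout avoids killing measurements of heavy kernels too early.
measure_ctx = auto_scheduler.LocalRPCMeasureContext(repeat=1, min_repeat_ms=300, timeout=100)

tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=20000,
    runner=measure_ctx.runner,
    measure_callbacks=[auto_scheduler.RecordToFile("tuning_log.json")],
)
```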

[Apache TVM Discuss] [Questions] RuntimeError: Could not find 'input_1' in graph's inputs

2021-05-12 Thread masahi via Apache TVM Discuss
Sorry, what you need is `print(irmod)`. Look for the input name after `main`, like below:

```
def @main(%input_tensor:0: Tensor[(1, 300, 300, 3), uint8]) -> (Tensor[(1, 100, 4), float32], Tensor[(1, 100), float32], Tensor[(1, 100), float32], Tensor[(1), float32]) {
```
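Once you have the name, pass it to the executor; a small sketch assuming the graph executor and a `lib` built by `relay.build`:

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input("input_tensor:0", np.zeros((1, 300, 300, 3), dtype="uint8"))
m.run()
```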

[Apache TVM Discuss] [Questions] RuntimeError: Could not find 'input_1' in graph's inputs

2021-05-12 Thread masahi via Apache TVM Discuss
Can you try `print(graph)` and see what input name it expects? --- [Visit Topic](https://discuss.tvm.apache.org/t/runtimeerror-could-not-find-input-1-in-graphs-inputs/9890/3)

[Apache TVM Discuss] [Questions] Representative Model Zoo

2021-05-06 Thread masahi via Apache TVM Discuss
I think our frontend tutorials are the closest thing we have to a "model zoo". I agree that having a larger collection of ready-to-run models, preferably with auto-tuned configs, would be valuable. Recently I looked at [the model zoo in openvino](https://github.com/openvinotoolkit/open_model_zoo…

[Apache TVM Discuss] [Questions] Use custom C++ code with TVM

2021-05-03 Thread masahi via Apache TVM Discuss
You should be able to do that by adding your cpp file under `src/runtime/contrib`. See the many examples there. --- [Visit Topic](https://discuss.tvm.apache.org/t/use-custom-c-code-with-tvm/9864/13)

[Apache TVM Discuss] [Questions] Use custom C++ code with TVM

2021-05-03 Thread masahi via Apache TVM Discuss
Oh, you are trying to add your packed function outside of `libtvm.so` or `libtvm_runtime.so`? I've never seen this and am not sure if it is going to work, because `libtvm.so` needs to "see" your function. cc @tqchen

[Apache TVM Discuss] [Questions] Use custom C++ code with TVM

2021-05-03 Thread masahi via Apache TVM Discuss
Hmm, it looks correct. Maybe you can try `make clean` and rebuild? Also, to be sure, you can put your function in the same file as `cublas.cc` etc. --- [Visit Topic](https://discuss.tvm.apache.org/t/use-custom-c-code-with-tvm/9864/9)

[Apache TVM Discuss] [Questions] Use custom C++ code with TVM

2021-04-30 Thread masahi via Apache TVM Discuss
See, for example, how we integrate cuBLAS: https://github.com/apache/tvm/blob/main/src/runtime/contrib/cublas/cublas.cc#L333 https://github.com/apache/tvm/blob/813136401a11a49d6c15e6013c34dd822a5c4ff6/python/tvm/contrib/cublas.py#L44-L52
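The Python half of that pattern, roughly (the C++ half registers a packed function with `TVM_REGISTER_GLOBAL`; the global name `tvm.contrib.my_lib.my_op` below is a made-up placeholder):

```python
import tvm
from tvm import te

def my_op(A, B):
    # Wrap an externally registered packed function as a TE extern op, mirroring
    # how python/tvm/contrib/cublas.py wraps "tvm.contrib.cublas.matmul".
    n, m = A.shape[0], B.shape[1]
    return te.extern(
        (n, m),
        [A, B],
        lambda ins, outs: tvm.tir.call_packed(
            "tvm.contrib.my_lib.my_op", ins[0], ins[1], outs[0]
        ),
        name="C",
    )
```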

[Apache TVM Discuss] [Questions] Use custom C++ code with TVM

2021-04-29 Thread masahi via Apache TVM Discuss
Yeah, Bring Your Own Codegen (BYOC) is probably what you are looking for. Have you seen this doc? https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm We have BYOC examples for DNNL and Arm Ethos, but in your case it would be much simpler. cc @comaniac

[Apache TVM Discuss] [Questions] Use custom C++ code with TVM

2021-04-29 Thread masahi via Apache TVM Discuss
That would indeed be less code, but I'd say it is more invasive. BYOC also scales better if users want to use more custom ops. --- [Visit Topic](https://discuss.tvm.apache.org/t/use-custom-c-code-with-tvm/9864/4)

[Apache TVM Discuss] [Questions] Check failed:allow_missing ==false :device API gpu is not enabled

2021-03-25 Thread masahi via Apache TVM Discuss
You need to use `tvm.cl(0)` for the OpenCL target. Or better: `tvm.context(target, 0)`. --- [Visit Topic](https://discuss.tvm.apache.org/t/check-failed-allow-missing-false-device-api-gpu-is-not-enabled/9532/2)

[Apache TVM Discuss] [Questions] Relay VM newbie: Porting Relay VM question

2021-02-08 Thread masahi via Apache TVM Discuss
Relay VM ops are TVM-specific; I don't think this is something you want to port. --- [Visit Topic](https://discuss.tvm.apache.org/t/relay-vm-newbie-porting-relay-vm-question/9097/2)

[Apache TVM Discuss] [Questions] Index_put operator in Relay

2021-02-08 Thread masahi via Apache TVM Discuss
Probably `scatter_nd`, see https://github.com/apache/tvm/pull/6854 --- [Visit Topic](https://discuss.tvm.apache.org/t/index-put-operator-in-relay/9094/2)

[Apache TVM Discuss] [Questions] How to modify and add an operator in pytorch frontend

2020-10-27 Thread masahi via Apache TVM Discuss
We have many examples of operator conversion in `frontend/pytorch.py`. I don't recommend modifying the max pool implementation: TVM doesn't take the indices from max pool into account, so you would need to modify code everywhere.

[Apache TVM Discuss] [Questions] Relay cannot compile while_loop

2020-10-26 Thread masahi via Apache TVM Discuss
You cannot use `relay.build(...)` to build a model with control flow. For that, you need to use the VM. See, for example, https://github.com/apache/incubator-tvm/blob/efe3a79aacd934ea5ffb13170230bf199a473e72/tests/python/frontend/pytorch/test_forward.py#L1914

[Apache TVM Discuss] [Questions] Graph_plan_memory doesn't support nested tuples?

2020-10-26 Thread masahi via Apache TVM Discuss
Thanks, I'll take a look. --- [Visit Topic](https://discuss.tvm.apache.org/t/graph-plan-memory-doesnt-support-nested-tuples/8278/8)

[Apache TVM Discuss] [Questions] Graph_plan_memory doesn't support nested tuples?

2020-10-26 Thread masahi via Apache TVM Discuss
Ok, thanks! I found the code Jared was probably referring to (`transform/memory_plan.py`, `transform/memory_alloc.py`; not sure why they are written in Python). I'm going to learn about memory planning and see what I can do.

[Apache TVM Discuss] [Questions] Understanding TVM/Relay's PartitionGraph()(mod) function

2020-10-26 Thread masahi via Apache TVM Discuss
Isn't it simply a problem of free variables? I suggest replacing `f = relay.Function([], result)` with `f = relay.Function(relay.analysis.free_vars(result), result)`. --- [Visit Topic](https://discuss.tvm.apache.org/t/understanding-tvm-relays-partitiongraph-mod-function/8290/4)

[Apache TVM Discuss] [Questions] Graph_plan_memory doesn't support nested tuples?

2020-10-25 Thread masahi via Apache TVM Discuss
Hi, the model I'm working on has the following output:

```
...
%1562 = (%1550, %1551, %1552, %1553, %1554, %1555, %1556, %1557, %1558, %1559, %1560, %1561);
(%1549, %1562)
}
```

i.e., the output is a tuple where the second element is another tuple with 12 elements. `relay.build(...)` er…

[Apache TVM Discuss] [Questions] How to use Relay Control Flow?

2020-10-21 Thread masahi via Apache TVM Discuss
Can you send a PR to add your implementation of the LSTM converter? This is a requested feature (see https://github.com/apache/incubator-tvm/issues/6474). Unrolling is the standard way to implement LSTM op conversion; both the MXNet and ONNX frontends do it. I don't recommend pursuing the approach of co…

[Apache TVM Discuss] [Questions] Unable to run the tvm tutorial deploy_prequantized.py using putorch

2020-09-23 Thread masahi via Apache TVM Discuss
Unfortunately, our quantized PyTorch model support is completely broken for PyTorch 1.6, due to a serious bug they introduced, and that's the error you would get if you try. See https://github.com/pytorch/pytorch/issues/42497 Other than waiting for them to fix this, we have no plan at the mom

[Apache TVM Discuss] [Questions] Import RNN-T pytorch model into TVM

2020-09-15 Thread masahi via Apache TVM Discuss
Ok, I was able to reproduce the issue. It seems supporting `aten::lstm` is complicated, and I'm not an expert on LSTM. I created an issue, https://github.com/apache/incubator-tvm/issues/6474, to ask for help. For now, I recommend exporting the model to ONNX and using our ONNX frontend, since it…
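A rough sketch of that workaround path (the model class, input shape, input name, and opset below are all illustrative placeholders):

```python
import onnx
import torch
import tvm
from tvm import relay

model = MyRNNTModel().eval()          # placeholder for the actual PyTorch model
dummy = torch.randn(1, 240, 80)       # illustrative input

torch.onnx.export(model, dummy, "rnnt.onnx", input_names=["input"], opset_version=11)

onnx_model = onnx.load("rnnt.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 240, 80)})
```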

[Apache TVM Discuss] [Questions] Import RNN-T pytorch model into TVM

2020-09-14 Thread masahi via Apache TVM Discuss
Sorry, can you make a git repo with all the necessary files? --- [Visit Topic](https://discuss.tvm.apache.org/t/import-rnn-t-pytorch-model-into-tvm/7874/12)

[Apache TVM Discuss] [Questions] Import RNN-T pytorch model into TVM

2020-09-14 Thread masahi via Apache TVM Discuss
Can you show me your script so that I can reproduce your problem? --- [Visit Topic](https://discuss.tvm.apache.org/t/import-rnn-t-pytorch-model-into-tvm/7874/4)

[Apache TVM Discuss] [Questions] Import RNN-T pytorch model into TVM

2020-09-10 Thread masahi via Apache TVM Discuss
You are using `torch.jit.script`. Please try `torch.jit.trace`.