[TVM Discuss] [Development] ONNX model compilation fails with a model that previously worked

2019-04-01 Thread Jared Roesch via TVM Discuss
It should be disabled by default (the default optimization level is 2), so I'm not sure why it is executing. Can you try:

```
with relay.build_module.build_config(opt_level=2):
    graph_json, lib, params = relay.build_module.build(...)
```
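For context, a minimal end-to-end sketch of the suggested workaround, assuming the 2019-era Relay API; the model path, input name, and input shape are placeholders, not taken from the thread:

```
import onnx
import tvm
from tvm import relay

# Load the ONNX model from disk (path is hypothetical).
onnx_model = onnx.load("mnist.onnx")

# Input name and shape are assumptions; check your model's graph inputs.
shape_dict = {"Input3": (1, 1, 28, 28)}
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# AlterOpLayout runs above level 2, so opt_level=2 should keep it off
# and work around the reported crash.
with relay.build_module.build_config(opt_level=2):
    graph_json, lib, params = relay.build_module.build(
        sym, target="llvm", params=params)
```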

[TVM Discuss] [Development] ONNX model compilation fails with a model that previously worked

2019-04-01 Thread mnboos via TVM Discuss
Sure, how can I turn off the optimization? I didn't actively enable it.

[TVM Discuss] [Development] ONNX model compilation fails with a model that previously worked

2019-04-01 Thread Jared Roesch via TVM Discuss
This looks like a bug or regression that someone introduced into the alter-layout pass. Could you open an issue against dmlc master so we can CC the appropriate people to work on it? In the meantime, you can try turning off the alter-layout optimization if you want to make progress.

[TVM Discuss] [Development] ONNX model compilation fails with a model that previously worked

2019-04-01 Thread mnboos via TVM Discuss
Yesterday I pulled the latest code and installed from source, which went without problems. After that, as a first test, I tried to compile an MNIST ONNX model, which failed with the stack trace below. The model can be downloaded here: https://we.tl/t-Tghn9o9EQ8 (md5: 9fc8b23aa4f33008360727d2fe1b0

Re: [dmlc/tvm] [RFC] Register Relay VM design (#2915)

2019-04-01 Thread Wei Chen
## Summary

@tqchen @icemelon9 @jroesch @zhiics @yongwww: we discussed in person and reached the following consensus:

1. Remove the `Phi` instruction. Instead, extend `If` to write its result to a new register (a toy illustration follows below).
2. Reuse the existing value stack as the register file. Have an anchor in the function frame to
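A toy illustration of point 1, using hypothetical instruction types that are NOT the actual Relay VM encoding: instead of a separate `Phi` joining the two branch results, the `If` instruction itself names the destination register that receives the value of whichever branch executes.

```
from dataclasses import dataclass
from typing import List

# Purely illustrative instruction set; not the real Relay VM ISA.

@dataclass
class LoadConst:
    dst: int
    value: int

@dataclass
class If:
    cond: int               # register holding the branch condition
    true_body: List[object]
    false_body: List[object]
    dst: int                # register the If writes its result to

def run(regs, instrs):
    """Evaluate a block and return the value of its last instruction."""
    result = None
    for i in instrs:
        if isinstance(i, LoadConst):
            regs[i.dst] = i.value
            result = i.value
        elif isinstance(i, If):
            body = i.true_body if regs[i.cond] else i.false_body
            # No Phi: the If stores the taken branch's result directly.
            regs[i.dst] = run(regs, body)
            result = regs[i.dst]
    return result

regs = {0: True}
run(regs, [If(cond=0,
              true_body=[LoadConst(dst=1, value=10)],
              false_body=[LoadConst(dst=1, value=20)],
              dst=2)])
print(regs[2])  # 10
```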

[TVM Discuss] [Development] Low efficiency on my own cpu

2019-04-01 Thread eqy via TVM Discuss
In any case, I would recommend autotuning first to see if that makes a difference.
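A minimal AutoTVM sketch of what "autotuning first" could look like, assuming the 2019-era API and an x86 target; `net` and `params` are assumed to come from a Relay frontend, and the log file name, CPU flags, and trial budget are arbitrary:

```
import tvm
from tvm import autotvm, relay
from tvm.autotvm.tuner import XGBTuner

# Pick target flags that match your CPU.
target = "llvm -mcpu=core-avx2"

# Extract tunable tasks (here: conv2d) from the Relay program.
tasks = autotvm.task.extract_from_program(
    net, target=target, params=params, ops=(relay.op.nn.conv2d,))

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=1))

for task in tasks:
    tuner = XGBTuner(task)
    tuner.tune(n_trial=200,  # arbitrary budget
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("tune.log")])

# Rebuild using the best configurations found during tuning.
with autotvm.apply_history_best("tune.log"):
    with relay.build_module.build_config(opt_level=3):
        graph_json, lib, params = relay.build_module.build(
            net, target=target, params=params)
```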

Re: [dmlc/tvm] [RFC][EXPR] Formalize Integer Arithmetic Analysis (#2588)

2019-04-01 Thread Tianqi Chen
What I mean is that we can support memoization even if we don't do the functional style, as I outlined in the last post.

Re: [dmlc/tvm] [RFC][EXPR] Formalize Integer Arithmetic Analysis (#2588)

2019-04-01 Thread Sergei Grechanik
@tqchen I guess we cannot know for sure: the performance benefits of memoization may outweigh the possible performance losses due to immutability. What is more important is that pure functions often make things clearer to think about.
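To illustrate the point with a toy example (hypothetical expression types, not TVM's actual analyzer): a pure analysis over immutable expressions memoizes trivially, because the result depends only on the input node and can never be invalidated by mutation.

```
from functools import lru_cache

# A tiny immutable expression tree; illustrative only.
class Expr:
    pass

class Const(Expr):
    def __init__(self, value):
        self.value = value

class Add(Expr):
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

@lru_cache(maxsize=None)
def max_depth(e):
    """Pure analysis: safe to cache because Expr nodes never mutate."""
    if isinstance(e, Const):
        return 1
    return 1 + max(max_depth(e.lhs), max_depth(e.rhs))

# Shared subtrees are analyzed once and served from the cache afterwards.
shared = Add(Const(1), Const(2))
print(max_depth(Add(shared, shared)))  # 3
```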

[TVM Discuss] [Development] How to debug and print out content of IndexExpr?

2019-04-01 Thread Yizhi Liu via TVM Discuss
`LOG(INFO) << oshape;` ?