[TVM Discuss] [Development/RFC] [RFC][µTVM] Standalone µTVM Roadmap

2020-06-16 Thread Tom Gall via TVM Discuss
Thanks for putting this out, Andrew. Lots to unpack, but generally it looks like a good list. On #6, I suspect a few of those stats could be gathered and reported prior to shipping down to a device (as mbed does, for instance). On #2, I wonder if perhaps something that runs daily against master…

[TVM Discuss] [Development/RFC] Dynamic Ops in Relay

2020-06-16 Thread Matthew Brookhart via TVM Discuss
https://github.com/apache/incubator-tvm/pull/5826
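For readers new to the thread, a minimal sketch (not from the PR) of what a dynamic-shape op looks like in Relay, assuming the `relay.Any()` placeholder dimension; the variable names are illustrative:

```python
import tvm
from tvm import relay

# A tensor whose leading dimension is unknown until runtime.
x = relay.var("x", shape=(relay.Any(), 4), dtype="float32")
# reshape with -1 defers the output extent to runtime as well.
y = relay.reshape(x, newshape=(-1,))
func = relay.Function([x], y)
print(func)
```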

Re: [apache/incubator-tvm] [RFC] Improve quantized convolution performance for armv8 architectures (#5754)

2020-06-16 Thread Animesh Jain
@FrozenGene Can you please review when you get time? https://github.com/apache/incubator-tvm/pull/5754#issuecomment-644902827

[TVM Discuss] [Development/RFC] [RFC][µTVM] Standalone µTVM Roadmap

2020-06-16 Thread Andrew Reusch via TVM Discuss
Hi @manupa-arm, thanks for reading over the RFC! BYO = "bring your own"; this would be akin to allowing developers to reimplement vmalloc. This comment mostly reflects how the CRT is organized today, but some changes may need to be made to the compilation process to make it easy to replace the…

[TVM Discuss] [Development/RFC] [RFC][µTVM] Standalone µTVM Roadmap

2020-06-16 Thread Manupa Karunaratne via TVM Discuss
Thanks for the RFC @areusch -- especially for posting this ahead of the meetup. I am trying to understand some bits around the subgoal of a BYO memory allocator. Would you be able to elaborate on this? Is it about allocating tensors from memory blocks/regions/addresses? If so, are we…

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-06-16 Thread Thomas V via TVM Discuss
Actually, this can happen in the body of the function, but not here, because these inputs come from a function signature. You can print `traced_module.code` to witness the translation (that is where I tracked down the function that reproduces the non-processed names). Another place where…
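To make the `traced_module.code` suggestion concrete, a self-contained sketch (the module here is hypothetical, not from the thread):

```python
import torch

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

# Tracing produces a TorchScript module; printing its code shows the
# translated function signature, including how inputs were named.
traced_module = torch.jit.trace(AddOne(), torch.rand(2, 3))
print(traced_module.code)
```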

[TVM Discuss] [Development/RFC] Add `init` option to ReduceNode to initialize with custom Tensors

2020-06-16 Thread Anirudh via TVM Discuss
@tqchen, @ziheng, I saw that you two have worked on ReduceNode. Could you share any input you have on this idea, and whether I can go ahead and work on adding this option to ReduceNode?
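For context, a sketch of a reduction as it is written in the tensor expression language today, where the accumulator implicitly starts at the reducer's identity (0 for `te.sum`); the proposal would let it start from a custom tensor instead. Names are illustrative:

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
k = te.reduce_axis((0, n), name="k")
# Today the sum starts from 0; an `init` option would let B start
# from the values of another tensor instead.
B = te.compute((1,), lambda i: te.sum(A[k], axis=k), name="B")
```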

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-06-16 Thread Jeremy Johnson via TVM Discuss
Unfortunately, I think this will not help if you have two inputs called `input.0` and `input.1` (this is allowed). These will get remapped to something new like `input.X` and `input.Y`, and working out which is which would be guesswork. Unless I am missing something?
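A small sketch of how to inspect the graph input names a remapping scheme would have to match after tracing (the module is hypothetical; `debugName` is the TorchScript accessor):

```python
import torch

class TwoInputs(torch.nn.Module):
    def forward(self, a, b):
        return a + b

traced = torch.jit.trace(TwoInputs(), (torch.rand(2), torch.rand(2)))
# The first graph input is `self`; the rest are the tensor arguments,
# printed with whatever names the tracer assigned them.
for inp in traced.graph.inputs():
    print(inp.debugName())
```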

[TVM Discuss] [Development] [DISCUSS] The meaning of "float" in Relay

2020-06-16 Thread Thomas V via TVM Discuss
So far the PR only changes the default. Is there an example of the strict mode that could be followed? Also, my changes for the PyTorch backend are intertwined with fixes for dealing with non-fp32 types in general (probably a property of my branch rather than a necessity), and I would not want to r…
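For readers following along, a minimal illustration of the default in question, assuming the current `relay.const` behavior of narrowing an unannotated Python float to float32 (a sketch, not code from the PR):

```python
from tvm import relay

# A bare Python float becomes a float32 constant under today's default;
# a stricter mode would require the dtype to be spelled out.
c = relay.const(1.0)
print(c.data.dtype)  # "float32" under the current default

c64 = relay.const(1.0, dtype="float64")
print(c64.data.dtype)  # "float64" when requested explicitly
```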