[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread tqchen via TVM Discuss
That dispatching logic can certainly be simplified as a one-liner, which will reduce the memo logic addition to about 10 loc:

```c++
Result VisitExpr(const Expr& expr) final {
  auto it = memo_.find(expr);
  if (it != memo_.end()) {
    return it->second;
  }
  // ParentClass is the ExprFunctor instantiation doing the actual dispatch.
  Result res = ParentClass::VisitExpr(expr);
  memo_[expr] = res;
  return res;
}
```

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread masahi via TVM Discuss
[quote="tqchen, post:6, topic:6334"] That dispatching logic ca certainly be simplified as a one-liner, which will reduce the memo logic addition to be about 10 loc [/quote] Yes, the point is each derived class ends up having the exact same 10 loc. Until now we have 2 or 3 cases so that might b

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread tqchen via TVM Discuss
While it is always possible to introduce more reuse by adding new layers of abstraction, there is also an additional cost to introducing more abstraction (of sub-classing), so it is usually a trade-off. In my experience, 10 loc of duplication is fine, as long as this pattern is clearly documented.

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
Yeah, I am not a big fan of introducing this base class either, as I think the only duplicated code would really just be the caching map. If you are concerned about those 10 loc, I can actually just remove them and replace them by calling Functor::VisitExpr(e…

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread tqchen via TVM Discuss
You can overload VisitDefaultExpr to add that error (for unsupported code) if you want a custom error message. --- [Visit Topic](https://discuss.tvm.ai/t/missing-memoization-in-exprfunctor/6334/10) to respond.
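For illustration, a minimal sketch of such an override; the class name `MyCodegen` is hypothetical, and it assumes the default handler is the `VisitExprDefault_` hook of `ExprFunctor`:

```c++
#include <tvm/relay/expr_functor.h>

// Hypothetical visitor: every node kind without an explicit VisitExpr_
// overload falls through to the default hook, which reports a
// codegen-specific error instead of the generic one.
class MyCodegen : public tvm::relay::ExprFunctor<void(const tvm::relay::Expr&)> {
 protected:
  void VisitExprDefault_(const tvm::runtime::Object* op) override {
    LOG(FATAL) << "MyCodegen does not support expression of type "
               << op->GetTypeKey();
  }
};
```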

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread tqchen via TVM Discuss
Thanks @masahi @zhiics for the great discussion so far. It would be great to also get your thoughts with regard to the C0, C1, C2, C3 styles in the long run, and whether we need non-recursive support for this part. --- [Visit Topic](https://discuss.tvm.ai/t/missing-memoization-in-exprfunctor/6334/11) to respond.

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
ahh, I didn't notice we have this one. Thanks. --- [Visit Topic](https://discuss.tvm.ai/t/missing-memoization-in-exprfunctor/6334/12) to respond.

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
To be honest, among C0-C3 I wouldn't want to introduce ANF to codegen. This means we would either want to do ANF on the whole program or run the pass internally in the extern codegen to convert it. If we run it on the whole program, I think some passes that work on the DFG would not work well/or…
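For concreteness, a rough sketch of the second option (running the conversion inside the extern codegen); the helper name is hypothetical, and it assumes the relay `ToANormalForm` pass:

```c++
#include <tvm/ir/module.h>
#include <tvm/relay/transform.h>

// Hypothetical helper inside an extern codegen: convert only the IRModule that
// was handed to this codegen into A-normal form, leaving the rest of the
// program (and the passes that rely on the DFG form) untouched.
tvm::IRModule ToANFForExternCodegen(tvm::IRModule mod) {
  tvm::transform::Pass anf = tvm::relay::transform::ToANormalForm();
  return anf(mod);
}
```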

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread masahi via TVM Discuss
Since the new base class would be as simple as the one below, I don't think there is much of an abstraction cost. I don't see why we should prefer duplicating the same `VisitExpr(const Expr& n)` over this solution. ``` template class MemoizedExprFunctor : public ::tvm::relay::ExprFunctor { usi…
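The snippet is cut off in the digest; below is a minimal sketch of what such a memoized base class could look like (a reconstruction under assumptions, not necessarily the exact code from the post):

```c++
#include <unordered_map>

#include <tvm/relay/expr.h>
#include <tvm/relay/expr_functor.h>

namespace tvm {
namespace relay {

// Sketch: caching layered on top of ExprFunctor's dispatch. Derived visitors
// only override the per-node VisitExpr_ handlers; the memo lives here once.
template <typename R>
class MemoizedExprFunctor : public ExprFunctor<R(const Expr&)> {
 public:
  using Parent = ExprFunctor<R(const Expr&)>;

  R VisitExpr(const Expr& expr) final {
    auto it = memo_.find(expr.get());
    if (it != memo_.end()) {
      return it->second;
    }
    R res = Parent::VisitExpr(expr);
    memo_[expr.get()] = res;
    return res;
  }

 protected:
  // Keyed by the underlying node pointer to avoid depending on a hash functor.
  std::unordered_map<const Object*, R> memo_;
};

}  // namespace relay
}  // namespace tvm
```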

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread Zhi via TVM Discuss
I have another thought on this: how about just putting this one in backend/utils.h, since the current usage of it would be for the code under there? For general passes it might be different though (like to_a_norm_form, to_cps, PE, etc.).

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread tqchen via TVM Discuss
It seems that the general consensus so far is that we can put such a class as @masahi suggested in an internal header. It is always good to discuss the alternatives and the tradeoffs; such discussions help us reach better code quality overall. When there are potential disagreements, it is also us…

[TVM Discuss] [Development/RFC] Allow non-nullable Object and Introduce Optional

2020-04-12 Thread tqchen via TVM Discuss
POC https://github.com/apache/incubator-tvm/pull/5314 --- [Visit Topic](https://discuss.tvm.ai/t/allow-non-nullable-object-and-introduce-optional-t/6337/2) to respond.

[TVM Discuss] [Development/RFC] Allow non-nullable Object and Introduce Optional

2020-04-12 Thread tqchen via TVM Discuss
We use ObjectRef and its sub-classes extensively throughout our codebase. Each of ObjectRef's sub-classes is nullable, which means it can hold nullptr as its value. While in some places we do need nullptr as an alternative value, the implicit support for nullptr in every ObjectRef creates ad…

[TVM Discuss] [Development/RFC] Allow non-nullable Object and Introduce Optional

2020-04-12 Thread tqchen via TVM Discuss
Some related discussions: making parameters optional certainly makes many of the Attrs more informative at compile time.

### Benefit of `Optional` and Non-Nullable Refs

For example, in the case of the topi operator `sum(x, axis)`, the true type of axis is `Optional<Array<Integer>>`. Making this intenti…
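As an illustration of that example, a minimal sketch (the function name is hypothetical and the header location of `Optional<T>` is assumed from the PoC):

```c++
#include <tvm/ir/expr.h>            // Integer
#include <tvm/runtime/container.h>  // Array, Optional (location assumed per the PoC)

using namespace tvm;
using namespace tvm::runtime;

// Hypothetical reduce helper: the type itself states that axis may be absent,
// instead of relying on an implicitly nullable Array.
void SumOverAxes(/* const Tensor& data, */ Optional<Array<Integer>> axis) {
  if (axis.defined()) {
    Array<Integer> axes = axis.value();  // reduce over the requested axes
    (void)axes;
  } else {
    // axis == NullOpt: reduce over all dimensions
  }
}
```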

[TVM Discuss] [Development] Missing memoization in ExprFunctor

2020-04-12 Thread masahi via TVM Discuss
[quote="zhiics, post:15, topic:6334"] I have another thought on this, how about just put this one in the backend/utils.h since the current usage of them would be for the code under there? [/quote] Yes, that's where I'd put this class in, given the current usage of `ExprFunctor`. [quote="tqc

[TVM Discuss] [Development/RFC] [DISCUSS] Module based Model Runtime Interface

2020-04-12 Thread Windclarion via TVM Discuss
`const unsigned char __tvm_dev_mblob[46788038] = {"TVM_BLOB_SIG"};` may not be enough, because 46788038 bytes is too big for many embedded systems, so I have to place `__tvm_dev_mblob` in a special section, for example a rodata section. So I mean I need to declare `__tvm_dev_mblob` as `const unsigned char`…
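For illustration, one common way to do this with GCC/Clang; the section name is made up, and the target's linker script would have to map it to flash/ROM:

```c++
// Place the packed device blob into a dedicated read-only section so an
// embedded linker script can keep it out of RAM.
__attribute__((section(".rodata.tvm_dev_mblob")))
const unsigned char __tvm_dev_mblob[46788038] = {"TVM_BLOB_SIG"};
```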

[TVM Discuss] [Development/RFC] Allow non-nullable Object and Introduce Optional

2020-04-12 Thread Junru Shao via TVM Discuss
Thank you for bringing up this proposal. Overall it looks very nice: it saves us a lot of engineering effort in checking nulls, and is a stronger convention that could be adopted in the codebase. I am more concerned about the upgrade plan. As of now, nullable ObjectRef is still allowed, but som…

[TVM Discuss] [Development/RFC] [DISCUSS] Module based Model Runtime Interface

2020-04-12 Thread Zhao Wu via TVM Discuss
Thanks for the response. Finally, we don't use this special hack; we generate this directly using LLVM IR, and LLVM will put it into the `rodata` section correctly. Like this test: ![image|690x165](upload://nkm1SoLvI1b36CZyZHWAIRUd7bi.png) ![image|690x397](upload://pXekhJ0Qe1ilLipMCtLW5JCP3DP…
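For readers following along, a rough sketch of emitting such a constant through the LLVM C++ API (illustrative only, not the code TVM actually generates; the payload here is just the signature string):

```c++
#include <llvm/IR/Constants.h>
#include <llvm/IR/GlobalVariable.h>
#include <llvm/IR/LLVMContext.h>
#include <llvm/IR/Module.h>

// Emit the packed blob as a constant global; because it is marked constant,
// LLVM places it in a read-only data section for the target.
void EmitDevBlob(llvm::Module* module, llvm::LLVMContext& ctx) {
  llvm::Constant* init =
      llvm::ConstantDataArray::getString(ctx, "TVM_BLOB_SIG", /*AddNull=*/true);
  new llvm::GlobalVariable(*module, init->getType(), /*isConstant=*/true,
                           llvm::GlobalValue::ExternalLinkage, init,
                           "__tvm_dev_mblob");
}
```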

[TVM Discuss] [Development/RFC] Allow non-nullable Object and Introduce Optional

2020-04-12 Thread tqchen via TVM Discuss
Right now the not-null ObjectRef is **opt-in**, which means that by default an ObjectRef is nullable. We can gradually change the Ref types to non-nullable; the steps are (see the sketch after this list):

- Change the macro to `TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS`,
- Remove the default constructor (if already defined) that corre…
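A minimal sketch of what the first step looks like for a single reference class (the `MyRef`/`MyRefNode` names are hypothetical):

```c++
#include <tvm/runtime/object.h>

using namespace tvm::runtime;

class MyRefNode : public Object {
 public:
  static constexpr const char* _type_key = "example.MyRef";
  TVM_DECLARE_FINAL_OBJECT_INFO(MyRefNode, Object);
};

class MyRef : public ObjectRef {
 public:
  // Before: TVM_DEFINE_OBJECT_REF_METHODS(MyRef, ObjectRef, MyRefNode);
  // After opting in, there is no default constructor and the ref cannot hold nullptr.
  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(MyRef, ObjectRef, MyRefNode);
};
```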

[TVM Discuss] [Development/RFC] [DISCUSS] Module based Model Runtime Interface

2020-04-12 Thread Windclarion via TVM Discuss
Good solution! Thanks FrozenGene! But if we use LLVM, LLVM-series targets can take advantage of this solution; I'm not sure whether other targets such as CUDA can use it. --- [Visit Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/54) to respond.

Re: [apache/incubator-tvm] [DEV] TVM v0.7 Roadmap (#4845)

2020-04-12 Thread shoubhik
What is the expected time for this release? What are the chances of it happening in May? -- View it on GitHub: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-612762262

[TVM Discuss] [Development/RFC] [DISCUSS] Module based Model Runtime Interface

2020-04-12 Thread Zhao Wu via TVM Discuss
CUDA could also use this, because CUDA's target host is LLVM. The example I showed is in fact a CUDA target, so you can see `NVIDIA NNVM Compiler` in the constant string. --- [Visit Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/55) to respond.

[TVM Discuss] [Development/RFC] [RFC] CoreML Runtime

2020-04-12 Thread Zhao Wu via TVM Discuss
I think leveraging Apple’s Neural Engine is one good motivation (we could add an example of how to leverage this). As we have TFLite's runtime, I think adding a CoreML runtime is reasonable. [quote="kazum, post:1, topic:6309"] Instead, we compile a CoreML model with the xcode `coremlc` command. [/quote]…

[TVM Discuss] [Development/RFC] [DISCUSS] Module based Model Runtime Interface

2020-04-12 Thread Windclarion via TVM Discuss
I got it. Thanks FrozenGene. --- [Visit Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/56) to respond.