That dispatching logic can certainly be simplified as a one-liner, which will
reduce the memo logic addition to about 10 loc.
```c++
Result VisitExpr(const Expr& expr) final {
  auto it = memo_.find(expr);
  if (it != memo_.end()) {
    return it->second;
  }
  Result res = ParentClass::VisitExpr(expr);
  memo_[expr] = res;
  return res;
}
```
[quote="tqchen, post:6, topic:6334"]
That dispatching logic ca certainly be simplified as a one-liner, which will
reduce the memo logic addition to be about 10 loc
[/quote]
Yes, the point is that each derived class ends up having the exact same 10 loc.
Until now we only have 2 or 3 cases, so that might be acceptable.
While it is always possible to introduce more re-use by adding new layers of
abstraction, there is also an additional cost to introducing more abstraction
(of sub-classing), so it is usually a trade-off.
In my experience, 10 loc of duplication is fine, as long as this pattern is
clearly documented.
Yeah, I am not a big fan of introducing this base class either, as I think the
only duplicated code would really just be the caching map. If you are
concerned about those 10 locs, I can actually just do it this way: remove them
and replace them by calling `Functor::VisitExpr(expr)` directly.
You can overload `VisitDefaultExpr` to add that error (for unsupported code) if
you want a custom error message.
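For illustration, a minimal sketch of what such an override could look like (the class name and messages are hypothetical; the handler and node types follow relay's `ExprFunctor` header, so treat the exact signatures as an assumption):
```c++
#include <string>
#include <tvm/relay/expr_functor.h>

using namespace tvm;
using namespace tvm::relay;

// Hypothetical visitor: handles the node kinds it supports explicitly and
// reports a custom error for everything else via the default handler.
class UnsupportedAwareVisitor : public ExprFunctor<std::string(const Expr&)> {
 public:
  std::string VisitExpr_(const VarNode* op) final { return "var"; }
  std::string VisitExpr_(const CallNode* op) final { return "call"; }
  // Overriding the default handler provides the custom error message.
  std::string VisitExprDefault_(const Object* op) final {
    LOG(FATAL) << "Unsupported expression type: " << op->GetTypeKey();
    return "";
  }
};
```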
---
Thanks @masahi @zhiics for the great discussions so far. It would be great to
also get your thoughts with respect to the C0, C1, C2, C3 styles in the long
run, and whether we need non-recursive support for this part.
---
ahh, I didn't notice we have this one. Thanks.
---
To be honest, among C0-C3 I wouldn't want to introduce ANF to codegen. This
means we either want to run ANF on the whole program or run the pass internally
in the extern codegen to convert it. If we run it on the whole program, I think
some passes that work on the DFG would not work well (or at all).
Since the new base class would be as simple as the one below, I don't think
there is much abstraction cost. I don't see why we should prefer duplicating
the same `VisitExpr(const Expr& n)` over this solution.
```c++
template <typename R>
class MemoizedExprFunctor : public ::tvm::relay::ExprFunctor<R(const Expr&)> {
  using ParentClass = ::tvm::relay::ExprFunctor<R(const Expr&)>;
  // VisitExpr checks memo_ first, then dispatches to ParentClass::VisitExpr.
  std::unordered_map<Expr, R, ObjectPtrHash, ObjectPtrEqual> memo_;
};
```
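As a usage sketch (hypothetical class name; it assumes the memoized `VisitExpr` from the snippet earlier in the thread), a derived translator then only writes the per-node `VisitExpr_` overloads and inherits the caching:
```c++
// Hypothetical translator built on the memoized base: shared sub-expressions
// are translated only once because VisitExpr consults memo_ first.
class MyCodegen : public MemoizedExprFunctor<std::string> {
 public:
  std::string VisitExpr_(const VarNode* op) final { return "var"; }
  std::string VisitExpr_(const CallNode* op) final {
    std::string args;
    for (const Expr& arg : op->args) {
      args += this->VisitExpr(arg) + ",";  // cached after the first visit
    }
    return "call(" + args + ")";
  }
};
```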
I have another thought on this: how about just putting this one in
backend/utils.h, since its current usage is for the code under there? For
general passes it might be different though (like to_a_norm_form, to_cps, PE,
etc.).
---
It seems that a general consensus so far is that we can put such a class, as
@masahi suggested, in an internal header. It is always good to discuss the
alternatives and the trade-offs; such discussions help us reach better code
quality overall. When there are potential disagreements, it is also useful to
discuss them openly.
POC https://github.com/apache/incubator-tvm/pull/5314
---
We use ObjectRef and its sub-classes extensively throughout our codebase.
Each of ObjectRef's sub-classes is nullable, which means it can hold nullptr
as its value.
While in some places we do need nullptr as an alternative value, the implicit
support for nullptr in all ObjectRefs creates additional burdens for developers.
Some related discussions: making parameters optional certainly makes many of
the Attrs more informative at compile time.
### Benefit of `Optional` and Non-Nullable Refs
For example, in the case of the topi operator `sum(x, axis)`, the true type of
`axis` is `Optional<Array<Integer>>`. Making this intent explicit in the type
is more informative for both developers and users.
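To make the benefit concrete, here is a small self-contained analogy using C++17 `std::optional` rather than the proposed TVM `Optional<T>` (whose exact API lives in the POC PR), so the names below are illustrative only:
```c++
#include <numeric>
#include <optional>
#include <vector>

// Analogy for sum(x, axis): the signature itself states that `axis` may be
// absent, so neither callers nor the implementation rely on an implicit
// nullptr with unclear meaning.
double Sum(const std::vector<double>& x,
           std::optional<std::vector<int>> axis = std::nullopt) {
  if (!axis.has_value()) {
    // No axis supplied: reduce over all elements.
    return std::accumulate(x.begin(), x.end(), 0.0);
  }
  // An axis was supplied: a real kernel would reduce only along *axis.
  // For this 1-D illustration the result happens to be the same.
  return std::accumulate(x.begin(), x.end(), 0.0);
}
```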
[quote="zhiics, post:15, topic:6334"]
I have another thought on this, how about just put this one in the
backend/utils.h since the current usage of them would be for the code under
there?
[/quote]
Yes, that's where I'd put this class, given the current usage of
`ExprFunctor`.
[quote="tqc
`const unsigned char __tvm_dev_mblob[46788038] = {"TVM_BLOB_SIG"};` may not be
enough, because 46788038 bytes is too big for many embedded systems, so I have
to place `__tvm_dev_mblob` in a special section, for example a rodata section.
That is, I need to declare `__tvm_dev_mblob` as a `const unsigned char` array
placed in such a section.
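For reference, the kind of placement being asked about can be expressed with a GCC/Clang section attribute; the section name and the shortened array below are hypothetical, not something TVM emits today:
```c++
// Pin a constant blob into a named read-only section so a linker script on an
// embedded target can map it to flash/ROM instead of RAM.
__attribute__((section(".rodata.tvm_dev_mblob")))
const unsigned char tvm_dev_mblob_demo[] = {
    'T', 'V', 'M', '_', 'B', 'L', 'O', 'B', '_', 'S', 'I', 'G', '\0'
    // ... the serialized module bytes would follow in the real blob
};
```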
Thank you for bringing up this proposal. Overall it looks very nice: it saves
us a lot of engineering effort on null checks, and it is a stronger convention
that could be adopted across the codebase.
I am more concerned about the upgrading plan. As of now, nullable ObjectRef is
still allowed, but some existing code still depends on that behavior.
Thanks for responding. In the end, we don't use this special hack; we generate
this directly using LLVM IR, and LLVM puts it into the `rodata` section
correctly.
Like this test:
Good solution, thanks FrozenGene! But if we use LLVM, only LLVM-based targets
can take advantage of this solution; I'm not sure whether other targets such
as CUDA can use it.
---
What is the expected time frame for this release? What are the chances of it
happening in May?
---
CUDA could also use this, because CUDA's target host is LLVM. In the example I
showed, it is in fact a CUDA target, so you can see `NVIDIA NNVM Compiler` in
the constant string.
---
I think leveraging Apple's Neural Engine is one good motivation (we could add
an example of how to leverage this). Since we have a TFLite runtime, I think
adding a CoreML runtime is reasonable.
[quote="kazum, post:1, topic:6309"]
Instead, we compile a CoreML model with the xcode `coremlc` command.
[/quote]
I got it. Thanks FrozenGene.
---