`RelayExprNode` is the actual node object in the AST, managed through an `ObjectPtr`. The same node might need to be referenced in different places across the code base, so `RelayExpr` is a reference-counted handle that points to the `RelayExprNode`. You can think of `RelayExpr` as a kind of `shared_ptr` to the underlying `RelayExprNode`.
---
Some of the graphs can't be optimized (they show N/A); I'm not sure if this is a bug. resnet18 is okay and works fine.

Here is my code to optimize with Tensor Cores:
[optimize.py (github.com)](https://gist.github.com/twmht/d
If we don't pad the input shape to 16, is it only the first convolution (the one on the input) that would not use Tensor Cores? Padding to 16 also adds some cost to the convolution, which may not pay off even when Tensor Cores are used.
---
[Visit Topic](https://discuss.tvm.apache.org/t/metaschedule-tens
I don't quite understand the phrase "managed reference". Could anyone explain
it to me? Really appreciate your help! 
---
[Visit Topic](https://discuss.tvm.apache.org/t/questions-about-ir-expressions-relayexpr-relay
Hello. I'm reading the C++ source code and am quite confused by two classes related to Relay IR: `RelayExpr` and `RelayExprNode`. What is the relationship between these two types? `RelayExpr` can be cast to `RelayExprNode*` via the method `get()`, and `RelayExprNode` can be cast to `RelayExpr`
---
I have no more ideas about the RPi4. The codegen workflow should be almost the same on the RPi4 and on macOS with LLVM. Did you try the target `tvm.target.arm_cpu("rasp4b")` or `tvm.target.arm_cpu("rasp4b64")`?
---
[Visit Topic](https://discuss.tvm.apache.org/t/trouble-building-examples-on-rpi4-macos-