cc @FrozenGene who might have some past experience in perf investigation
---
[Visit
Topic](https://discuss.tvm.ai/t/how-to-further-improve-the-performance-of-given-schedule/7711/2)
to respond.
You are receiving this because you enabled mailing list mode.
cc @kazum who might have some experience. iOS requires a special RPC setup; you can
find some instructions here:
https://github.com/apache/incubator-tvm/tree/master/apps/ios_rpc
---
[Visit
Topic](https://discuss.tvm.ai/t/auto-tvm-how-to-auto-tune-the-model-on-ios-device/7681/3)
to respond.
Please check out https://tvm.apache.org/docs/dev/index.html
---
[Visit Topic](https://discuss.tvm.ai/t/how-to-divide-tvm-components/7667/2) to
respond.
You can call into the function via the specific name you pass to the build function,
`tvm.build(..., name="myfunc")`:
```python
mod["myfunc"](args)
```
---
[Visit
Topic](https://discuss.tvm.ai/t/building-tvm-with-c-runtime-support/7006/4) to
respond.
We will need to export the library as a shared library before executing it. You
can do `lib.export_library("xyz.so")` and then load it back.
---
[Visit
Topic](https://discuss.tvm.ai/t/building-tvm-with-c-runtime-support/7006/2) to
respond.
Sorry to have missed the thread during the weekend. It is certainly a bug we
should fix; @t-vi, can you send a patch?
---
[Visit
Topic](https://discuss.tvm.ai/t/tvm-relay-build-modifying-its-argument/6958/6)
to respond.
The easiest way to do so might be to start the RPC tracker and server
separately (and not use LocalRunner).
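A sketch of what that looks like (assuming a standard TVM install; the device key `local` and the port numbers are just examples). Run these in separate terminals outside the notebook:

```shell
# Terminal 1: start the tracker.
python -m tvm.exec.rpc_tracker --host 0.0.0.0 --port 9190

# Terminal 2: start a server and register it with the tracker.
python -m tvm.exec.rpc_server --tracker 127.0.0.1:9190 --key local
```

The tuning measure option would then use something like `autotvm.RPCRunner("local", host="127.0.0.1", port=9190)` in place of `LocalRunner`.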
---
[Visit Topic](https://discuss.tvm.ai/t/jupyter-notebooks-and-autotuning/6973/2)
to respond.
cc @jroesch
The IRModule itself is indeed mutable, but right now the relay pass infra
tries to create a new module when possible (via copy-on-write). If you have a
particular example and can dig a bit into where the mutation happens, we can
try to work on fixing it.
---
https://github.com/apache/incubator-tvm/pull/5649
---
[Visit
Topic](https://discuss.tvm.ai/t/parameters-silent-might-not-be-used/6753/4) to
respond.
It is due to an update on the xgboost side that deprecates the `silent`
parameter; we should change it to `verbosity=0`.
---
[Visit
Topic](https://discuss.tvm.ai/t/parameters-silent-might-not-be-used/6753/3) to
respond.
Please check out the instructions under the docs folder. The CI scripts
under tests would also be helpful.
---
[Visit Topic](https://discuss.tvm.ai/t/how-to-generate-docs-locally/6387/3) to
respond.
You will need to compile with the MIOpen header in your include path.
Alternatively, you can remove miopen.cc; this won't affect the autotvm part.
---
[Visit
Topic](https://discuss.tvm.ai/t/rocm-segmentation-fault-error-when-auto-tuning/6402/4)
to respond.
You will need to set up an RPC server explicitly, as per
https://github.com/apache/incubator-tvm/tree/master/apps/rocm_rpc, due to a
limitation of the ROCm driver.
---
[Visit
Topic](https://discuss.tvm.ai/t/rocm-segmentation-fault-error-when-auto-tuning/6402/2)
to respond.
We should pass by ObjectRef in most parts of the codebase. The only
exception for now is the functor dispatching classes, where the first argument
is the Object node class itself and can be viewed as a weak reference to the
original node.
There is some interest in moving the functor dispatching
What you described certainly works and is the approach we have used so far :)
Depending on demand, we can also start to think about making PackedFunc a
subclass of ObjectRef.
---
[Visit
Topic](https://discuss.tvm.ai/t/how-can-i-transfer-a-list-of-functions/6239/3)
to respond.
It depends on the case; some floating-point units take longer for NaN and
different values. If it is a fixed-cycle unit, we might be fine.
---
[Visit
Topic](https://discuss.tvm.ai/t/do-not-write-tensor-data-in-microtvm-with-autotvm/6109/9)
to respond.
You can use tvm.nd.empty to achieve the same goal. Note, however, that the
memory may still need to be initialized (perhaps via a remote RPC call);
otherwise it can impact the measured performance.
---
[Visit
Topic](https://discuss.tvm.ai/t/do-not-write-tensor-data-in-microtvm-with-autotvm/6109/7)
to respond.
We could also explore a possible alternative approach that hooks up Spike
directly (not via OpenOCD), which might let us get around the problem of
data copy speed (via direct memory-copy shortcuts into the simulator).
OpenOCD is only one way to implement
https://github.com/apache/incubat
All of TVM's IR nodes (this of course includes IRModule) are serializable.
- See https://docs.tvm.ai/api/python/ir.html#tvm.ir.load_json
- pickle will also work
- If a readable text format is desired, we can also use astext to print the
module into text and then use the parser to parse it back
That is part of the Python binding.
---
[Visit Topic](https://discuss.tvm.ai/t/where-is-dldatatype-defined/6071/4) to
respond.
DLDataType is defined in `3rdparty/dlpack/include/dlpack.h`.
---
[Visit Topic](https://discuss.tvm.ai/t/where-is-dldatatype-defined/6071/2) to
respond.
You are absolutely right; a runtime-only library (libtvm_runtime) would be a
good way to go. Right now (before 0.7) we can do that from source by typing
`make runtime` instead of `make`.
---
[Visit Topic](https://discuss.tvm.ai/t/deployment-to-pytorch-dlpack/6069/5) to
respond.
We are in the process of doing quite a bit of refactoring of the runtime in
this release cycle. We do hope to have a packaging story for 0.7 through pip
or conda.
---
[Visit Topic](https://discuss.tvm.ai/t/deployment-to-pytorch-dlpack/6069/3) to
respond.
It is actually possible to get multiple versions of a function compiled
together; it just needs a bit of extra effort.
One way to achieve that now is to call tvm.lower (instead of build) to get a
List[LoweredFunc] for each of the functions you care about (give them
different names), then concatenate
Integer floormod/floordiv are now supported in low-level codegen, so it is
just a matter of exposing them to Relay.
---