Sorry for the delay here--no, you don't need an RPC tracker. In the future,
when we support distributing tuning jobs to more than one board, a tracker may be
needed functionally, but I suspect we would hide that under the covers.
---
@fPecc thanks for your questions! Here are some answers.
[quote="fPecc, post:1, topic:12091"]
* AutoTVM running on the host computer
* Multiple boards connected to the same host computer running the AutoTVM
schedules.
[/quote]
We've demonstrated the first one, but not the second one yet. It sh
@huanchunye it should be possible, yes. However, it depends on the format you
use to export from tvmc (for example, Model Library Format is not meant to be
loaded using `load_module`). Could you give some more detail about what you're
trying to do?
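To make the distinction concrete, here's a minimal sketch (file names are placeholders, not from this thread): a shared library produced by `export_library` can be loaded with `load_module`, whereas a Model Library Format archive is meant for microTVM project generation rather than runtime loading.

```python
# Minimal sketch; "deploy_lib.so" and "model.tar" are placeholder names.
import tvm

# A shared library produced by export_library loads fine:
lib = tvm.runtime.load_module("deploy_lib.so")

# A Model Library Format archive does not -- it is a .tar of generated
# sources, metadata, and params intended for microTVM project generation,
# so the line below is what you should *not* expect to work:
# tvm.runtime.load_module("model.tar")
```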
---
Sorry for the long delay!
[quote="sho, post:7, topic:11569"]
[quote="areusch, post:6, topic:11569"]
There are actually two GraphExecutor implementations in TVM: [one
](https://github.com/apache/tvm/blob/main/src/runtime/graph_executor/graph_executor.cc)
for the C++ Runtime and [one
](https://
There are a couple of different targets that output something so similar to C
(e.g. CUDA, OpenCL) that some of the functionality was extracted into a
common superclass, `CodeGenC`. When you specify `target="c"`, it uses
`CodeGenCHost` in `codegen_c_host.cc`. You might look at that for more detail.
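If it helps, here's a rough sketch (toy model; APIs may shift a bit between TVM versions) of how to dump the C source that `CodeGenCHost` emits for a small Relay function:

```python
# Rough sketch: build a toy Relay function with target="c" and print the
# generated host C source (emitted via CodeGenCHost).
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="c")

print(lib.get_lib().get_source())  # the generated C for the compiled operators
```
---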
[quote="sho, post:12, topic:11682"]
Like there might be some minor architectures (say for some minor
microcontrollers) that LLVM doesn’t support (so we have to develop LLVM backend
ourselves to be able to emit executables for that minor architecture).
[/quote]
Yep--we intend to keep the `c` backend
[quote="sho, post:10, topic:11682"]
it means you get C code(not objects or executables compiled by LLVM) anyway
right?
[/quote]
Yes. However, the main reason we suggest the C route right now is because of
some cleanup we need to do around AOT code generation. It should be possible to
use LLVM
[quote="sho, post:6, topic:11682"]
If you specify like below,
> ```
> TARGET = tvm.target.target.micro("stm32f746xx")
BOARD = "nucleo_f746zg" # or "stm32f746g_disco#"
> ```
Does it always mean you generate C?
[/quote]
Yeah--this is just a shortcut:
https://github.com/apache/tvm/blob/main/pyth
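For reference, a quick way to see exactly what the shortcut expands to on your checkout is to construct and print it; the exact flags are version-dependent, so treat the comments below as illustrative:

```python
# Sketch: tvm.target.target.micro() is a convenience wrapper that returns a
# regular tvm.target.Target -- a "c" target with MCU-specific attributes.
import tvm

target = tvm.target.target.micro("stm32f746xx")
print(target)       # full target string, e.g. with -mcpu=cortex-m7 (flags vary)
print(target.kind)  # expected to print "c", i.e. the C backend is used
```
---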
@sho no problem, apologies if this is a bit confusing.
[quote="sho, post:5, topic:11682"]
As you probably know, I was confused about LLVM, and how it is used. So LLVM is
used to build TVM compiler. LLVM also works when you input your model into TVM
compiler and get TVM’s IR. LLVM compiles the
[quote="sho, post:5, topic:11569"]
So the Graph Runtime works on top of C Runtime? Could you please tell me where
the C Runtime actually is? I found the link below but it seems that
graph_executor is written in C++.
[/quote]
There are actually two GraphExecutor implementations in TVM:
[one](h
Sorry, I should clarify. `libtvm_runtime.so` means the TVM C++ runtime library.
Compiled TVM models need to be run using a TVM runtime (there are two--the TVM
C++ runtime or the TVM C runtime). The TVM runtime handles details such as
calling the compiled operator functions in graph order and memory management.
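As a concrete illustration of what the runtime does for you, here's the usual C++-runtime flow: load the exported library, instantiate a graph executor, and let it call the compiled operator functions in graph order. The file name, input name and shape below are placeholders.

```python
# Sketch of the C++ runtime path; file name, input name and shape are
# placeholders for whatever your model actually uses.
import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)
lib = tvm.runtime.load_module("deploy_lib.so")  # uses the TVM C++ runtime

# The graph executor walks the graph and invokes each compiled operator
# function in order, managing the intermediate buffers for you.
gmod = graph_executor.GraphModule(lib["default"](dev))
gmod.set_input("input", np.zeros((1, 3, 224, 224), dtype="float32"))
gmod.run()
out = gmod.get_output(0).numpy()
```
---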
@cron I think you're asking about how to express a situation where a subgraph
of two adjacent Relay operators may match two patterns in the graph
partitioning logic. could you elaborate on the exact problem you're having? I
think the ordering logic should be sufficient to handle this, but it'd
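In case a concrete example of the ordering helps, here's a hedged sketch of a BYOC pattern table where the larger composite pattern is listed ahead of the single-operator one, so `MergeComposite` tries it first. The `mycodegen` name and the patterns are made up for illustration.

```python
# Sketch: two overlapping patterns; order in the table decides which one
# "wins" for a conv2d followed by relu. Names here are illustrative only.
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

def conv2d_relu_pattern():
    conv = is_op("nn.conv2d")(wildcard(), wildcard())
    return is_op("nn.relu")(conv)

def conv2d_pattern():
    return is_op("nn.conv2d")(wildcard(), wildcard())

pattern_table = [
    ("mycodegen.conv2d_relu", conv2d_relu_pattern()),  # tried first
    ("mycodegen.conv2d", conv2d_pattern()),
]

partition = tvm.transform.Sequential([
    relay.transform.MergeComposite(pattern_table),
    relay.transform.AnnotateTarget("mycodegen"),
    relay.transform.PartitionGraph(),
])
```
---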
@haruhi both the approach suggested by @comaniac and the one by @Mousius might
be appropriate for you depending on the situation.
If you want to offload the _entire_ model and you want to use your own C
compiler, the `c` backend will indeed do what you want. We built a specialized
export function
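Roughly, that path looks like the sketch below (toy model; depending on your TVM version you may not need the explicit `Runtime("crt")` argument): build with the `c` backend and export Model Library Format, which packages the generated C sources so your own compiler can build them.

```python
# Sketch: compile a toy model with the `c` backend and export Model Library
# Format for consumption by an external C toolchain. Details (e.g. whether
# runtime="crt" is required) vary by TVM version.
import tvm
from tvm import relay, micro
from tvm.relay.backend import Runtime

x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

with tvm.transform.PassContext(opt_level=3):
    factory = relay.build(mod, target="c", runtime=Runtime("crt"))

# model.tar contains the generated C sources, graph/params, and metadata.
micro.export_model_library_format(factory, "model.tar")
```
---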
@popojames right now we only optimize at the "operator" level (post-operator
fusion). It's possible that, as we begin expanding optimization towards the
subgraph level, we'll need to incorporate some way of accounting for memory
copy time. However, as @tkonolige mentioned, this is somewhat difficult
Glad it helped. I don't know offhand why this would be happening; it seems like
LLVM should support double, so it seems likely we are misconfiguring it. If you
find anything, please feel free to send a PR!
-Andrew
---
hi @heatdh,
I believe the kernels are in `c_mod.imported_modules[0]`. Can you try
`c_mod.imported_modules[0].save("lib.s")`?
Andrew
---
@adavis
sorry--PE: physical execution unit (e.g. just a generic name for a CPU,
accelerator, etc.)
thanks for the clarifying explanation. I think you should follow [this
discussion](https://discuss.tvm.apache.org/t/pre-rfc-additional-target-hooks/10430)
on splitting the BYOC lower and generat
The compiler output is a tree of `runtime.Module`. DSO-exportable means a
`runtime.Module` in that tree whose `type_key` is `c` or `llvm`. TVM links
directly against LLVM and invokes the LLVM APIs to generate code.
When you call `export_library`, TVM traverses the tree of `runtime.Module`.
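A hedged sketch of what that tree looks like in practice (toy model; the module layout differs per target and TVM version): you can walk `imported_modules` and check each node's `type_key` to see which parts are DSO-exportable.

```python
# Sketch: print the runtime.Module tree of a build result. Nodes whose
# type_key is "c" or "llvm" are the DSO-exportable ones.
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
lib = relay.build(mod, target="llvm")

def walk(module, indent=0):
    print(" " * indent + module.type_key)
    for child in module.imported_modules:
        walk(child, indent + 2)

walk(lib.get_lib())

# export_library does a similar traversal: DSO-exportable nodes are written
# out as object/source files, the rest are serialized, and everything is
# linked into a single artifact.
lib.export_library("deploy_lib.so")
```
---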
hi @heatdh,
could you let me know which version of LLVM you're using, or any other steps or
scripts I could use to reproduce the problem?
thanks,
Andrew
---
hi @sahooora,
could you double-check you're driving the model with the same inputs and
parameters each time? if so, could you provide some more info such as which
revision of TVM you're using and some scripts we can use to reproduce your
error?
thanks,
Andrew
---
Hi @JosseVanDelm,
Thanks for the post! Some thoughts:
>Right now a lot of calls to the HWlib are very inefficient, as they require a
>lot of data reformatting on the RISC-V before being accessible to the
>accelerator. It is weird/annoying that the data layout already gets specified
>from Re
hi @dream-math @jossevandelm @Julien
Thanks for some great collaboration--it seems like there is significant
community interest in merging RISC-V support. Let's lay out some steps we can
follow to make this happen, and then we can discuss timelines.
1. To start with, I propose we explicitly tes
hi @Julien,
Got it. TVM has two runtimes: a C++ runtime (used when an OS is present;
referenced in many of our tutorials) and a C runtime (used when no OS is
present; referenced in our microTVM tutorials).
Support for this arrangement isn't complete yet but would fall under microTVM.
See the [µTV
hi @Julien ,
Do you have an operating system on the RISC-V controller?
Thanks,
Andrew
---
@aakah18151 sorry for the delay. Hm, it seems to me like you might be somehow
allocating too much memory for your device in the runtime. You could be
overwriting the stack when you call `set_input`. Unfortunately we don't have a
good way to detect this at the moment, though Zephyr should provide you
@aakah18151 `tests/micro/qemu/test_zephyr.py` should work on the STM32F746xx
board. That should test sine_model.tflite. You can run it with:
`python tests/micro/qemu/test_zephyr.py --microtvm-platforms=stm32f746xx`
Is this the tutorial you're trying?
---
We haven't posted one yet--we are doing some prerequisite work right now but
will try to post it in the next week or two.
---
@aakah18151 we don't quite have good enough debugging for this right now for me
to be certain, but based on your stack trace and the fact that it's inside
>` [bt] (6)
>/tvm_micro_with_debugger/tvm/build/libtvm.so(tvm::runtime::RPCClientSession::AllocDataSpace(DLContext,
> unsigned long, unsigned lon
hey @zhaozilongwhu, I think this is because you don't have `set(USE_MICRO ON)`
in your `config.cmake`. `tests/scripts/task_config_build_cpu.sh` turns this
option on, which is why `task_cpp_unittest.sh` assumes that that target exists.
Andrew
---
hey @qelk123, sorry I missed this. It looks like maybe the script is requiring
you to configure the attached board's serial number; this is passed as the
`hla_serial` option given to openocd.
I think the root cause error is this line: `expected exactly one argument to
hl_serial `
It's been a while
Hi Mike,
We're just finishing a rewrite of the µTVM runtime and as part of that we'll
update the example code from the blog post. You can read more in this [forum
thread](https://discuss.tvm.apache.org/t/is-there-an-up-to-date-utvm-code-that-i-can-refer-to/8224).
In the meantime, if you want
hi @davide-giri,
there's been some work (see below) to optimize TVM-generated code on RISC-V. At
`main` today, there isn't anything specific to RISC-V checked in, but I'm also
not aware of anything that would prevent you from running on RISC-V today.
could you provide some more clarification
that sounds pretty reasonable to me. I need to read more about the metadata
encoding, but it seems like we should avoid copying data out of flash.
---
@manupa-arm yeah exactly--the main difference is that µTVM wants a static
library by default. I'm okay with O1 (reusing export_library) so long as we
don't need to change export_library too much to accommodate µTVM (I don't
believe any changes are needed, after reviewing it here).
for my auto
@manupa-arm ah I don't necessarily think use of `fcompile` is a bad idea, but
for µTVM that does mean that you *must* pass `fcompile`, so we just need
to make sure the API is easy/obvious enough to use (or build another API on top
of this).
re: the `SaveToBinary`: I agree that would be
hi @manupa-arm,
It seems like we have two options:
O1. build a library version of this [StaticRuntime fcompile
function](https://github.com/areusch/incubator-tvm/blob/utvm-runtime/python/tvm/micro/build.py#L217)
and make `export_library` call this function to create something like a DSO
(instea
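For what it's worth, here's a very rough sketch of what O1 could look like from the caller's side: hand `export_library` a custom `fcompile` that archives cross-compiled objects into a static library. The toolchain names are placeholders and the exact files `export_library` passes to `fcompile` can vary between TVM versions, so treat this purely as a sketch.

```python
# Rough sketch only: a custom fcompile that builds a static archive. The
# cross-compiler names and include path are placeholders; the precise
# fcompile contract depends on the TVM version.
import subprocess

def fcompile_static_lib(output_path, input_files, **kwargs):
    objects = []
    for path in input_files:
        if path.endswith((".c", ".cc")):
            obj = path + ".o"
            subprocess.check_call(
                ["arm-none-eabi-gcc", "-c", path, "-o", obj, "-Iinclude"])
            objects.append(obj)
        else:
            objects.append(path)
    subprocess.check_call(["arm-none-eabi-ar", "rcs", output_path] + objects)

# Hypothetical usage on a module built with target="c":
# lib.get_lib().export_library("libmodel.a", fcompile=fcompile_static_lib)
```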