Hello,
I have a task to implement TVM codegen that generates code from Relay operators for an external DSP accelerator which has a C API.
The accelerator API consists of C functions such as:
void f1(int* a, int* b, char* c);
void f2(float* a, int64_t* b, char* c);
Each Relay operator can be matched to one of these C functions.
---
Looks like it could be abstracted as calling a packed function. On the low level you can use `call_packed` as demonstrated
[here](https://github.com/apache/tvm/blob/main/tests/python/unittest/test_te_tensor.py#L187).
CC @yuchenj on the high-level IR.
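For concreteness, a rough sketch along the lines of the linked test, assuming the accelerator routine has been registered on the runtime side under a hypothetical packed-function name `dsp_f1`:

```python
import tvm
from tvm import te

# Wrap the accelerator call as an extern stage; at lowering time the body
# becomes a call into the packed function registered as "dsp_f1"
# (a hypothetical name standing in for a routine like f1 above).
n = te.size_var("n")
A = te.placeholder((n,), dtype="int32", name="A")
B = te.extern(
    (n,),
    [A],
    lambda ins, outs: tvm.tir.call_packed("dsp_f1", ins[0].data, outs[0].data, n),
    name="B",
    dtype="int32",
)
s = te.create_schedule(B.op)
mod = tvm.lower(s, [A, B], name="dsp_f1_wrapper")
```

The extern stage only expresses the call itself; how each Relay operator gets mapped onto such a stage is the higher-level question.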
---
Summarizing the discussion a bit here:
- There is consensus that such a bypass mechanism could be useful.
- There is widespread concern about abuse. Due to this concern, it's been suggested to also improve our CI filter to skip parts of the CI for certain changes.
- There is consensus that committ
Sounds like a good case for https://github.com/tlc-pack/relax/issues/46
---
Closed #8976.
---
@MajorTom, would BYOC + pattern matching against Relay ops work for your use case?
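For reference, a BYOC pattern table might look roughly like this (a sketch only; the `dsp` target name and the conv2d+relu pattern are placeholders, not the actual accelerator ops):

```python
from tvm.relay.dataflow_pattern import is_op, wildcard
from tvm.relay.op.contrib.register import register_pattern_table

@register_pattern_table("dsp")
def dsp_pattern_table():
    # Match a conv2d followed by relu so the pair can be offloaded as one
    # composite function handled by the external codegen.
    conv = is_op("nn.conv2d")(wildcard(), wildcard())
    conv_relu = is_op("nn.relu")(conv)
    return [("dsp.conv2d_relu", conv_relu)]
```

Matched regions would then typically be partitioned with MergeComposite / AnnotateTarget / PartitionGraph and handed to the external codegen.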
---
On the implementation side, a bot (likely this new one:
https://github.com/tvm-bot) with triage-level permissions could ensure that
`[skip ci]` is present in the PR title if any of the commits in the PR have
skipped CI (if a commit comes later where CI isn't skipped, the submitter
can manua
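As a rough illustration of the check such a bot would perform (a hypothetical helper; only the public GitHub REST API is assumed):

```python
import requests

API = "https://api.github.com/repos/apache/tvm"

def title_missing_skip_ci(pr_number: int, token: str) -> bool:
    """True if some commit in the PR skipped CI but the PR title lacks [skip ci]."""
    headers = {"Authorization": f"token {token}"}
    pr = requests.get(f"{API}/pulls/{pr_number}", headers=headers).json()
    commits = requests.get(f"{API}/pulls/{pr_number}/commits", headers=headers).json()
    any_skipped = any("[skip ci]" in c["commit"]["message"] for c in commits)
    return any_skipped and "[skip ci]" not in pr["title"]
```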
---
@hht Hi, I was wondering if you implemented nn.upsample in the graph_pack
process. At present, I am trying to run a UNet through VTA, but I ran into some
problems in the graph_pack process. I suspect the cause is nn.upsample or
torch.cat. A full description can be found in another post I