> I think the main outstanding thing is the question of support here. I'd love
> for a few more folks to weigh in, tagging a few folks who may have ideas:
> @tqchen @jroesch @kparzysz-quic @u99127 @Mousius @leandron @comaniac @zhiics
> @Hzfengsy @ZihengJiang @yzhliu @masahi @icemelon
>
> broadl
@driazati - this is a really good starting point for releases and I'm very
glad this is coming together.
I had a ponder last night.
A few additional points to consider, either as an enhancement to this RFC or to
make things clearer in this RFC itself:
- Lifecycle of a release: what happens to a release
How can we move this forward? This appears to be getting stalled.
--
https://github.com/apache/tvm-rfcs/pull/22#issuecomment-920704274
Minor nit: the title of the RFC should really read "[RFC] Use CMSIS-NN with
TVM".
@manupa-arm, @Mousius ..
--
https://github.com/apache/tvm-rfcs/pull/15#issuecomment-892150107
> > I'd suggest that "nearly done" is ambiguous? As a less ambiguous
> > alternative, I'd propose always opening a tracking issue (if the RFC is big
> > enough to require it) when you raise an RFC; if it ultimately gets
> > rejected, we just close the issue. This also allows code to evolve alon
@areusch - could this also get a new ci-cpu image?
--
https://github.com/apache/tvm/pull/7628#issuecomment-808152945
@mbaret - could you please review this? Once this is approved, we need to
request @tqchen or @tmoreau89 to respin the docker ci_cpu images.
--
https://github.com/apache/tvm/pull/
I would like to see some thought about the release process and release
timelines for the TVM project. Initially, I would like some indication of when
0.8 is likely to happen and when future releases are likely to follow.
Is Ansor now considered fully merged into the code base?
regards
Ramana
--
Thanks for this initiative; it is a commendable step towards reducing the
burden of using the Apache TVM project.
Could you link to the Apache policy here for other folks to read and see what
other guidelines need to be investigated? I couldn't find it easily.
It might also be worthwhile
I'd like to see the tvmc work be functional before we cut the release, please.
I'd also like to see the Ethos-N PRs land as well.
--
https://github.com/apache/incubator-tvm/iss
I believe using this needs CMake 3.12 or later because of the use of
FindPython3 in your CMake modules. This would require an update to the
install-from-source documentation, as that currently implies a requirement of
CMake > 3.5 for building TVM.
---
[Visit Topic](https://discuss.tvm.ai/t/add-the-do
Thanks, that sounds like it should be relatively straightforward to integrate.
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/per-axis-quantization-support-for-tflite/6726/4) to respond.
Hello there,
Welcome to the community! AFAIK, there is nothing in place for signed int8
symmetric quantization support in the tflite frontend yet, even in master;
however, I believe the underlying code generation framework can support it with
the qnn dialect of Relay, based on this:
https://di
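As a very rough sketch of what qnn can express directly in Relay (untested;
the scale, shapes, and zero point below are invented for illustration):

```python
# Untested sketch: symmetric int8 quantize/dequantize via Relay's qnn dialect.
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 8), dtype="float32")
scale = relay.const(0.05, "float32")   # hypothetical per-tensor scale
zp = relay.const(0, "int32")           # symmetric quantization => zero point 0
q = relay.qnn.op.quantize(x, scale, zp, out_dtype="int8")
dq = relay.qnn.op.dequantize(q, scale, zp)
mod = tvm.IRModule.from_expr(relay.Function([x], dq))
print(mod)
```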
This is a draft PR, intended only for discussion and not for merging as is.
These are a couple of commits that show a proof of concept for how we could
restructure and improve the tflite frontend. I've lightly tested these by
compiling a couple of tflite models to give me some confidence that they w
Any more opinions?
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-improve-pull-requests-with-respect-to-bug-fixes/6529/4) to respond.
clang-tidy certainly looks interesting as something deeper than clang-format,
and it is likely to help us with other aspects that we may be missing. However,
I'm probably a bit old-school and would be a bit more careful about
clang-tidy -fix ... :)
That might be the next step
Maybe take the next steps?
1. Do a flag-day clang-format rewrite and take the one-time cost of every
in-flight patch having a merge conflict.
2. Once we are clang-format clean, we could have CI run clang-format and fail
CI instantly if there is any change in the source base compared to the pull
request (a sketch of such a check follows below).
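For step 2, a minimal sketch of what the CI gate could look like (assuming
clang-format is on PATH; the file patterns are just examples, not TVM's real
set):

```python
# Untested sketch of a CI gate: run clang-format in place over the tracked
# C++ sources, then fail if that produced any diff against the pull request.
import subprocess
import sys

files = subprocess.run(
    ["git", "ls-files", "*.cc", "*.h"],  # example patterns
    check=True, capture_output=True, text=True,
).stdout.split()
subprocess.run(["clang-format", "-i", *files], check=True)
diff = subprocess.run(
    ["git", "diff", "--name-only"], check=True, capture_output=True, text=True
).stdout
if diff:
    print("clang-format is not clean; offending files:\n" + diff)
    sys.exit(1)
```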
**Motivation**
We would like to move towards a world where there is a clear attempt to become
more predictable about release cycles and about what the usage of a release is
going to be. As part of this, releases need regression fixes.
However, if the community is making releases, th
Thanks for the quick review.
> It would be great if we can avoid the hack in `with_same_user`. One
> alternative would be to still pass in the `PYTEST_ADDOPTS` env variable from
> the docker env (for development purposes) but source the setup-pytest-env
> within each of the scripts.
>
I don't like my current hack of overloading `with_same_user` to source this
global environment, but it seemed like the simplest hack and worked in my
environment. Obviously I don't have CUDA testing in my CI or my regular test
environment, so this isn't fully verified.
--
In many places, having a global pytest flag is useful. For me, with the
build and test of tvm, I would like to be able to globally pass in
pytest options as part of development or CI flows, where one would
regularly like to measure other things that need measurements, including
pytest coverage d
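The mechanism I have in mind is pytest's own PYTEST_ADDOPTS environment
variable; a minimal sketch (the coverage flags below are just examples and
would need pytest-cov installed):

```python
# Untested sketch: pytest appends whatever is in PYTEST_ADDOPTS to its command
# line, so a wrapper can inject options globally without editing each script.
import os
import pytest

os.environ["PYTEST_ADDOPTS"] = "--cov=tvm --cov-report=term -ra"  # examples
pytest.main(["tests/python/unittest"])
```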
To move this forward, I spent some time over the past few days getting both
TF 1.15 and TF 2.x testing with our CI, and ran into a few issues. See:
https://github.com/apache/incubator-tvm/pull/5392
https://github.com/apache/incubator-tvm/pull/5391
regards
Ramana
---
@jknight - in case it wasn't obvious, I do support the initiative.
Yes, the scheme you have outlined seems to work reasonably well for
information dissemination about new features.
When there are interactive discussions in that fashion and design changes are
made due to the discussio
My motivation was indeed peer collaboration and interactive peer
conversations, plus an additional use of existing tools in the toolbox.
regards
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/tvm-online-meetups/6382/4) to respond.
I think this is a good initiative. However, it is quite expensive in terms of
logistics and organization.
Additionally, it's probably time to think about using the Slack channels more
and ensuring that conversations on Slack move to the Discuss forums or the PRs
once the interactive conversati
I wasn't proposing that as the solution; it is one of the options. I'm merely
stating that this is still a problem that will hit others, most notably anyone
using the C backend.
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/61) to respond.
So the problem hasn't been fixed: there is a "solution" that depends on the
presence of an LLVM target.
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/59) to respond.
This won't work by default for the C backend, where we don't necessarily rely
on the presence of LLVM. Or are we saying that there needs to be an LLVM
solution for the backend just to produce this constant data object, always? We
do need a general solution.
Ramana
---
I would start by incorporating these points into the "Development Process" bits
of the TVM documentation. I will put up a pull request, since no one has
commented on this in about 2 months.
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/development-process-and-backporting-patches-to-rele
@tmoreau89 - I lean more towards storing the list of raw measurements.
--
https://github.com/apache/incubator-tvm/issues/4304#issuecomment-557749626
Thanks @tqchen, OK, now the term "engine" makes sense.
My experience has been different: some prefer median over mean, others
min/max, and others a geometric mean.
regards
Ramana
--
To pick up on a couple of topics from the pull request:
1. I've found that keeping the raw data allows one to process it in other
ways, in terms of { iteration: 1, runtime: ... } records, instead of only
storing the statistics (a quick sketch follows below).
Different folks would want to compute different statistics from the data, and we
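As a quick sketch of what I mean (the numbers are invented), raw per-iteration
records let each consumer derive whatever statistic they prefer:

```python
# Sketch with made-up data: raw { iteration, runtime } records support any
# downstream statistic, whereas storing only one summary loses the rest.
import statistics

raw = [{"iteration": i, "runtime": r}
       for i, r in enumerate([0.012, 0.011, 0.013, 0.011, 0.019])]
runtimes = [m["runtime"] for m in raw]

print("mean   :", statistics.mean(runtimes))
print("median :", statistics.median(runtimes))
print("min/max:", min(runtimes), max(runtimes))
print("geomean:", statistics.geometric_mean(runtimes))  # Python >= 3.8
```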
Cool, thanks @tqchen
--
https://github.com/apache/incubator-tvm/issues/4304#issuecomment-557732023
> I think docker_tag is fine under metadata as long as it is optional (the RFC
> said the field was required, which confuses me).
>
> There are certainly cases when people want to limit the number of threads,
> e.g. use only the little cores on the phone to save power. While these could
> have been
Should we have a known issues section?
These are some initial thoughts off the top of my head from the last 5
minutes; I am sure that there are more.
- TFLite rounding vs TVM rounding causing differences in accuracy and
potentially off-by-1 errors (a small illustration follows below). For reference:
https://github.com/apache/in
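A small illustration of the rounding point (plain numpy, not the actual TFLite
or TVM code paths):

```python
# Two common rounding conventions disagree exactly on .5 ties, which is one
# way requantized int8 results can end up off by one between implementations.
import numpy as np

x = np.array([0.5, 1.5, 2.5, -0.5, -1.5])
half_to_even = np.round(x)                             # [ 0.  2.  2. -0. -2.]
half_away = np.sign(x) * np.floor(np.abs(x) + 0.5)     # [ 1.  2.  3. -1. -2.]
print(half_to_even, half_away)
```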
> **Covered frameworks for now** - TFLite and MxNet
> **Target network for now** - Inception V3 from TFLite. (I will create one for
> Mxnet)
> **Target platforms for now** - ARM and Intel (will create separate Issue as
> the project progresses)
A quick question here, since I can't see this mentioned
Thanks for the poke. I've been investigating why it's been failing off and on
for a couple of days; however, I don't yet have a GPU environment set up, and
the failure doesn't happen with the CPU docker script.
I'm going to be busy for the next couple of days.
--
This patch adds initial support for the tflite operator SPLIT. However,
I am not yet sure how to handle the axis parameter for the split
operator and support it in the test infrastructure. Putting this up for
an initial review and comment (a rough sketch of the intended mapping follows
below).
The split operator in tflite, according to
https://www.tenso
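A rough sketch of the mapping I have in mind (not the patch itself; the shapes
and axis value are invented):

```python
# Untested sketch: a TFLite SPLIT with num_splits=3 along axis 1 could lower
# to relay.split, whose outputs come back as a tuple of sub-tensors.
from tvm import relay

x = relay.var("x", shape=(1, 6, 4), dtype="float32")
parts = relay.split(x, indices_or_sections=3, axis=1)
func = relay.Function([x], relay.Tuple([parts[0], parts[1], parts[2]]))
```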
Hi Alexander,
Thanks for your response. Ah, I just saw the support for REDUCE_MAX; let me
investigate why this is failing for us with "operator unsupported" again.
Sorry, no, our models aren't open sourced. Would you know of any tools like
creduce to create smaller models that could be used as test
We've been trying to run some internal pre-quantized models with the tflite
frontend and ran into the following missing operators. We'd like to add
support for these, and to see if there are others in the community who are
interested in this activity, to prevent any duplication
There are some good points mentioned above: having a set of good
getting-started guides based on different use cases would certainly be a good
starting point.
One of the things that I found hard when getting started with tvm as a
user was the absence of canned frontends as at the