SaveToFile is more of a low-level API that is not very consistently defined.
For most multi-module cases, we use `mod.export_library` instead.
The logic works as follows:
- LLVM/C modules use SaveToFile to save the right source code.
- Other modules use SaveToBinary to serialize the pt
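The dispatch described above can be sketched in plain Python. This is an illustrative stub, not TVM's actual implementation; `ModuleStub` and the exact type keys are assumptions for illustration only:

```python
# Illustrative sketch of the export_library dispatch logic (not TVM's
# actual implementation): source-level modules (llvm, c) take the
# SaveToFile path, everything else is serialized via SaveToBinary.
class ModuleStub:
    def __init__(self, type_key, payload):
        self.type_key = type_key
        self.payload = payload

def export_library(modules):
    files, blobs = [], []
    for m in modules:
        if m.type_key in ("llvm", "c"):
            files.append((m.type_key, m.payload))   # SaveToFile path
        else:
            blobs.append((m.type_key, m.payload))   # SaveToBinary path
    return files, blobs
```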
I see, I am just debating whether accelerators is the right name. Perhaps
devices?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-composite-target/7744/6) to respond.
You are receiving this because you enabled mailing list mode.
I agree P2 is better. However, we need to be mindful that the composite can go
beyond single accelerator settings. For example, we might also want to compose
`arm_cl` and opencl on ARM GPU
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-composite-target/7744/3) to respond.
Thanks everyone for discussions. If there is no objection, I propose we send a
PR to the website about page.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-tvm-community-vision-statement/7601/3) to
respond.
Hmm, one potential thing that we might be able to do is a setup step before
using any vulkan device, to allow us to initialize the vulkan runtime with the
desired device and features. This is also related to the opencl runtime, where
we have ways to select the right opencl runtime.
---
Thanks @samwyi, it would be great to provide an example of what you mean by
device discovery. For example, we might be able to perform some formal setup to
select the device before starting to use one.
DLContext is specifically limited to the common terminology (in terms of device
id and device
While I can certainly see the value of fixed point multiply, there are a few
other alternatives (simpler than fpm), which I list below:
- When the scale itself is a power of two, it is possible to directly turn
things into a right shift, without having to invoke any multiplication.
However, given that
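The power-of-two special case mentioned above can be sketched as a rounding right shift. This is a minimal sketch; the function name is mine, not TVM's:

```python
def rounding_right_shift(x, k):
    # Equivalent to round-half-up of x / 2**k for integer x and k >= 1:
    # add half of the divisor before shifting, so no multiply is needed.
    return (x + (1 << (k - 1))) >> k
```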
Introducing fixed point multiply in the TIR seems to be quite an overkill,
given that most of the operator can be expressed by basic integer arithmetic.
Would it be easier to detect the pattern (of multiply, shift, and round) and
rewrite it into the fixed point multiply?
Notably, we can
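The multiply-shift-round pattern discussed above can be written out in plain integer arithmetic. This is a sketch of the usual Q-format scheme, not TVM's exact definition; names and the Q31 convention are assumptions:

```python
def fixed_point_multiply(x, multiplier, shift):
    # Approximates x * (multiplier * 2**-31) * 2**shift using only an
    # integer multiply, an add (for round-half-up), and a right shift.
    # Assumes 0 <= shift < 31 so total_shift stays positive.
    total_shift = 31 - shift
    rounded = x * multiplier + (1 << (total_shift - 1))
    return rounded >> total_shift
```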
Thanks for the good summary. One concern that I have for this case is
mainly about the coupling of the quantization part with the customized code
generator.
While the application scenario is certainly understandable, we will need to
resolve two questions, as an overall goal of the proje
cc @ziheng @weberlo who might also be interested
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-byoc-data-calibration-flow/7099/7)
to respond.
cc @liangfu @tgall_foo
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-misra-c-changes-for-rpc-support/7098/2) to
respond.
In this case the parsing is already necessary and built in, because the numpy
convention uses strings for dtype. So we are trying to build compatibility
for interoperating with something that already exists. The types on the C++
side are structured.
---
Some comments on the dtype: the dtype field in Tensor is actually quite
flexible (it goes beyond the enumeration, since arbitrary vector lengths,
bitwidths, and customized data types are also allowed). So perhaps a string, or
making a structured variant, makes sense. So we can continue to use string for simpli
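To illustrate how a string dtype can stay that flexible, here is a tiny parser sketch. The format loosely follows the `float32x4`-style convention described above; the function name and exact grammar are illustrative, not TVM's actual code:

```python
import re

def parse_dtype(s):
    # Sketch: handles "<type><bits>" with an optional "x<lanes>" vector
    # suffix; custom data types (e.g. "custom[posit]16") are out of scope.
    m = re.fullmatch(r"(float|int|uint|bfloat)(\d+)(?:x(\d+))?", s)
    if m is None:
        raise ValueError("unsupported dtype: " + s)
    return m.group(1), int(m.group(2)), int(m.group(3) or 1)
```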
cc @merrymercy @zhiics @haichen @FrozenGene @comaniac @ajtulloch @antinucleon
@junrushao1994
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-canonicalizing-autotvm-log-format/7038/13)
to respond.
As an alternative, we can try to add enough overloads to make sure that the
object itself behaves like a list or number, so such a cast is not necessary.
We have already done that for string (by having String subclass str).
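The `String`/`str` trick mentioned above looks roughly like this in Python:

```python
class String(str):
    """Stand-in for an object wrapper that subclasses the built-in str:
    instances pass isinstance(s, str) checks and support all str methods,
    so no explicit cast back to str is needed at API boundaries."""
```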
---
The proposal looks good. Notably, the config will need to evolve as we migrate
to ansor, so perhaps we could try to keep it opaque, or find a way to upgrade
later.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-canonicalizing-autotvm-log-format/7038/11)
to respond.
Given that the format will likely evolve in ansor, we might need to leave
certain fields opaque, and keep things at the top level for now.
In particular, the current top-level fields include:
- input (describes the computation, or workload)
- config (describes the set of schedule configs to app
I agree, we could throw an error in the strict mode to forbid pure object
construction.
---
[Visit Topic](https://discuss.tvm.ai/t/attrs-not-inheriting-from-attrs/7029/4)
to respond.
I agree that it is good to add those attrs to the python side so that they map
to Attrs.
---
[Visit Topic](https://discuss.tvm.ai/t/attrs-not-inheriting-from-attrs/7029/2)
to respond.
Yes, that is what I mean: bringing mechanisms to ansor so that we can allow
users to take full or partial control of the search space when necessary.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/11)
to respond.
While using ansor to deprecate AutoTVM is a strong statement, I think it is a
goal we should strive to achieve. I do not think it will replace the op
strategy though, since we need strategy for the graph level selection and
dispatching.
In particular, I would encourage us to think toward that
Thanks @merrymercy can you also post a rough system diagram of components as
well as an example API for example usages?
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/2)
to respond.
@junrushao1994 how about we list the proposal options and see what everyone
thinks? We can do it in this thread or in a separate thread.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/35) to
respond.
OK, to summarize, the actionable items include:
* convert default to fp32
* fix the float occurrences to use fp32
@t-vi thanks for bringing up the topic, perhaps we can reopen your PR about
the fp32 default change?
---
[quote="t-vi, post:1, topic:6955"]
That can be touchy in other parts if suddenly all consts “1” are the same thing.
[/quote]
Given that we are having a systematic discussion about A0/A1 approaches to
dynamic, perhaps we can revisit the case once we transition the reshape to the
new convention -
Indeed A1 can address the problem better:
In the proposal, there is a dyn-to-static pass; this pass will try to convert
constants to attributes as much as possible. After this pass, all of the
constant-shape reshapes will become static, and then we can apply CSE easily.
Of course, we can also
Something along that direction. In the meanwhile, it seems we are converging:
- convert default to fp32 and add a warning
- fix the float occurrences to use fp32
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/17)
to respond.
I actually meant `TVM_STRICT_MODE` that changes the `"float"` handling behavior
to directly throw, not intercepting the warnings. This way we can clean up the
use of `"float"` in our own codebase but still allow users to use it.
---
Here is another idea:
- “float = float32” but with a warning
- Add an env variable `TVM_STRICT_MODE` to force the usage of "float" to throw,
and enable the flag in the CI, so that we fix all the usage in our current
codebase
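The idea could be sketched as follows. The `normalize_dtype` helper is hypothetical (TVM's real dtype handling lives elsewhere); only the warn-by-default, throw-under-`TVM_STRICT_MODE` behavior follows the proposal above:

```python
import os
import warnings

def normalize_dtype(dtype):
    # Hypothetical helper: bare "float" maps to "float32" with a warning
    # by default, but raises when TVM_STRICT_MODE is set (e.g. in CI).
    if dtype == "float":
        if os.environ.get("TVM_STRICT_MODE"):
            raise ValueError(
                "'float' is ambiguous; use 'float32' or 'float64'")
        warnings.warn("'float' is interpreted as 'float32'")
        return "float32"
    return dtype
```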
---
To keep things simple, we can disallow symbolic vars in attributes and force
attributes to be constant; if the value is symbolically dependent, we should
use the dyn variant.
---
[Visit Topic](https://discuss.tvm.ai/t/dynamic-ops-in-relay/6909/13) to respond.
Any thoughts about disambiguating and giving users a warning when `float` is
used (and asking them to use `float32` or `float64`)?
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/7)
to respond.
cc @jroesch @zhiics @comaniac @liangfu @junrushao1994
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/2)
to respond.
Yah, I think it is fair to pass in the names of each variable. Another way is
to rely on the positional ordering of the constants themselves (so the variable
name is implicit).
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/30) to
respond.
Do you think it is possible to do C1? It reduces the requirement of passing a
Map.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/27) to
respond.
cc @FrozenGene as it is also related to module exportation format
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/25) to
respond.
Indeed it is a tradeoff, and in this case there can certainly be multiple
choices. The key problem we want to answer is how we want to expose and pass
the "symbol" of meta data to each engine. This is an important topic as it can
affect the serialization convention of our future packages; let us thi
Would Array of NDArray be sufficient?
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-json-runtime-for-byoc/6579/22) to
respond.
We want to think about alternative ways to pass in the meta data; for example,
we could call initialize using an array instead of a Map. While it is OK to use
Map in the runtime, we will face a similar issue on microcontrollers, where it
is harder to pass in a Map structure.
---
Seems we have converged on A1 with the additional clarifications in this thread
:slight_smile:
---
[Visit Topic](https://discuss.tvm.ai/t/dynamic-ops-in-relay/6909/10) to respond.
@FrozenGene, @jwfromm, @zhiics it would be great if we can follow up about
potential suggestions for the docs.
---
[Visit Topic](https://discuss.tvm.ai/t/add-the-document-for-tvmdsoop/6622/3) to
respond.
I think the main topic of interest here is the way we define function
signatures, not necessarily how to broaden the scope of the same function to
support more flexible inputs (e.g. `Any`).
I think the main goal of A1 concerns the semantics of the attribute, since
attribute has always been conside
Both A0 and A1 should be able to reduce the complexity of the frontend logic,
as conversion can always go to the dynamic variants first, and then a follow-up
conversion promotes the dynamic variants to their static counterparts.
From the interface design PoV, A0 somewhat creates additional duplica
In most cases we do need to generate the host code together with the device
code before we are going to run it. One way to resolve this problem for
re-targetable builds is to not specify `target_host` in the program (as it can
be optional before split-host-device), and then manually re-sp
Fair point, how about the `llvmjit` and `llvmcpu` proposal?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/31) to
respond.
I think there is still value in keeping the JIT, as a lot of our current
examples depend on it. Another way to think about it is that llvm itself is a
target, and we happen to have a JIT engine locally for that target.
We can discuss the alternatives, for example, introducing an llvmjit tar
I agree with your concern. One thing we could do is to add a default set of
keys for a target id, when keys are not explicitly present. For example, cuda
will always have cuda and gpu attached to its keys at creation time.
We cannot automatically add uncommon keys like tensorcore though. Bu
Right now the jit and cpu aspects do not necessarily conflict with each other:
if the target is local, it can be exported normally as a library; if it is a
cross-compilation target, then we cannot directly execute it, but we are still
able to export it to a library.
So llvm right now means cpu, and jit if
I don't think that would become a problem under the new module serialization
https://tvm.apache.org/docs/dev/introduction_to_module_serialization.html
We will simply recover several DSOModules, all of which share the same library.
---
Some thoughts:
1. I think they should be based on keys. Ideally, we should not think about
generic dispatching but about the collection of strategies that can be applied.
For example, if the keys include `[gpu, cuda, tensorcore]`, then it means we
can apply all the strategies registered for these three
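The key-based view described above can be sketched as a simple registry. The names here (`register_strategy`, `applicable_strategies`) are illustrative stand-ins, not the actual op strategy API:

```python
# Illustrative registry keyed by target keys; a target carrying keys
# [gpu, cuda, tensorcore] picks up strategies from all three buckets.
STRATEGIES = {}

def register_strategy(key, name):
    STRATEGIES.setdefault(key, []).append(name)

def applicable_strategies(target_keys):
    found = []
    for key in target_keys:
        found.extend(STRATEGIES.get(key, []))
    return found

register_strategy("gpu", "conv2d_generic_gpu")
register_strategy("cuda", "conv2d_cuda")
register_strategy("tensorcore", "conv2d_tensorcore")
```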
I like the modularized setup that decouples the meta-data from the code. It
would be great to have a brainstorm and discussion about the naming candidates
for the `PackingModule`.
Also cc @junrushao1994 @FrozenGene
---
Thanks for the example. One of our goals is to consolidate the setting into a
single target so that the configuration becomes simple. In this case it should
be the system target.
I still think it is useful to allow an optional `target_host` (we can also
change the name if we find a better alt
That is why we might want to have a target host field in the device target as
in the above examples. The split host device pass can pick up the target host
field and split out the host driving part into a program set to have the
`target_host`.
Due to the restrictions of the target device(e.g.
I do not disagree. The sticky point is how we categorize the "host
driving" (memory allocation, kernel launch parameter computation) part of the
target program.
We do not intend to categorize an arbitrary CPU + GPU program as a "gpu
program". Under V0, a device target (with target host) program can
If a program contains both a GPU and a DSP, then the target is `composite`
(which is supported), with both of the device targets' `target_host` pointing
to the same host. Given that the target host is optional, we could also not
specify the target host in this case, assuming the host is clear
[quote="kparzysz, post:11, topic:6844"]
a composite target looks like a better solution. As the next step I suggest
that we **drop the target host** completely. A function can be split into
multiple parts meant for different targets. Instead of explicitly designating a
certain target as a *tar
The json format is an analogy, and we certainly do not have to strictly use a
json file or string. For example, in the python API, we could directly create
the target from a dictionary (in the style of json).
```python
target = tvm.target.create({
    "id": "cuda",
    "target_host": {"id": "llvm"}
})
```
We ca
Due to the ASF policy, we only produce source releases officially.
Also note that the CUDA-related binaries require an EULA from NV, which is not
strictly Apache compliant. This is fine for our users, but it creates some
barriers for releasing the binary as an ASF entity (unofficially).
We certainl
We will need the separate target ID because that is the key to the target
registry. So it does not make sense to make the id numeric, as the names
directly correspond to the backends in many cases (cuda, llvm). Keys are needed
for generic strategy reuse, as in the current autotvm.
---
Yes, that is the goal
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/7) to
respond.
It serves as a way for users to quickly specify the target.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/4) to
respond.
Trying to capture the discussions, here is a strawman:
https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844
---
[Visit Topic](https://discuss.tvm.ai/t/target-and-attributes/6013/10) to
respond.
In collaboration with @areusch
A target object in TVM provides all the information needed for code lowering
and code generation.
Currently, a target is specified by a string in the format of
`<target-name> [attributes]`, where the `<target-name>` is defined by the name
of the final code generator (llvm, cuda, opencl).
I see, I can see us doing that as well. We would need to have a clear fallback
mechanism, though, for the generated files, as in cases when we want to build a
runtime-only module we may not have a correct `setuptools_scm` dependency.
Officially, we only release a source code package on stable releas
also cc @vegaluis @liangfu
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-vta-support-for-cloud-devices-opencl-compatible/6676/2)
to respond.
I am thinking along the lines of `python version.py --scm-version` that
performs the calculation of the tag (as in the logic of `setuptools_scm`) and
updates the relevant files.
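The tag calculation could be sketched like this, assuming `git describe`-style input. The function name and exact format are mine; it only mirrors the `setuptools_scm`-style `<base>.dev<distance>` convention discussed in this thread:

```python
import re

def scm_dev_version(describe_output):
    # Turn "v0.7.0-912-gabc1234" (git describe output: last tag, commit
    # distance, short hash) into "0.7.0.dev912".
    m = re.fullmatch(r"v?([\d.]+)-(\d+)-g[0-9a-f]+", describe_output)
    if m is None:
        raise ValueError("unexpected describe output: " + describe_output)
    base, distance = m.group(1), int(m.group(2))
    return "{}.dev{}".format(base, distance)
```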
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-naming-scheme-for-tvm-versions-and-packages/6833/4)
to respond.
I agree that adopting the version convention `0.7.0.dev912` makes sense.
It would be useful to see if we can simply update the `version.py` script to
do so; my take is that it is not too hard, as we can invoke a few git commands
to do that, and still use the same script to update versions whe
Yap, or choose TIR lowering for some sub-functions :)
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-ethosn-arm-ethos-n-integration/6680/19) to
respond.
We could, but given that it is not strictly a bug, we can also choose not to
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/4) to
respond.
I want to follow up that the general infrastructure of pattern matching and
rewriting does not conflict with AutoTVM.
It is important to take a composite view of the infrastructure, and view BYOC
as a natural feature combining parts of the infrastructure together, rather
than a monolithic
Thanks Yizhi, I think it is a great idea. We have already backported a few
patches.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/2) to
respond.
Thanks @Menooker for the RFC. It would be great if a motivation section could
be added (why bf16 support) for others who do not have a background on this. In
terms of technical choices, it would be great to list the design choices,
discuss their pros and cons, and then talk about conce
Given that the old composite pattern is not yet part of a release, it might be
easier to directly migrate to the new one so that we don't have to maintain two
variants of pattern languages, in the spirit of reducing technical debt.
---
I don't think we need to do that. Just like the case of SourceModule, they are
not registered anywhere.
As the code base refactors further, we could introduce it to the target build,
when it is clear that the case of ONNX requires the IRModule to contain relay
functions instead of TIR funct
First of all, given that the topic still relates to the Module-based runtime
interface, I think it should be discussed in that thread. I am not too sure
about the originality argument.
In a nutshell though, we should always want to ask whether the feature is
needed, the additional engineeri
We don't have to strictly go through the TIR part, as the target only means
IRModule -> runtime::Module. It is totally fine for a target to take in an
IRModule that contains relay functions. I agree that it would be useful to have
an ONNXModule as a runtime module.
---
https://github.com/apache/incubator-tvm/pull/5572
---
[Visit
Topic](https://discuss.tvm.ai/t/ci-lint-enabling-clang-format-based-lint-checks/6170/13)
to respond.
Please start another discuss thread for new questions (on weight
serialization). The current proposal does have a `package_params` option that
packages the weights.
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/68)
to respond.
Note that the parameters have to be loaded into DRAM, so there is no place
where we could do a partial weight load.
For memory-limited scenarios like embedded devices, we would certainly need to
go for a different solution, for example directly storing weights in the rodata
section to remove the ne
It would be helpful to ask why and why not when introducing new dependencies.
See some of the examples in the design decisions above. Flatbuffers could be
useful when we need to serialize a complicated set of objects, but it also
introduces an additional layer of abstraction.
Given that we are on
https://github.com/apache/incubator-tvm/pull/5545
---
[Visit
Topic](https://discuss.tvm.ai/t/deprecate-opengl-webgl-in-favor-of-vulkan-webgpu/6364/8)
to respond.
Yes, in the case of a DSO module the engine creation is a function emitted by
the codegen.
Note that my main point is about de-coupling the meta-data (weights) from the
code, and it would be good to discuss further what the class should look
like. In terms of the code part, we could certainly allo
Thanks for the questions.
The JSON proposal is another layer of abstraction that serves as an interpreter
for general workloads, as it defers the running of the library code by
interpreting the "bytecode", in this case defined by a json format. I
understand the objective this RFC proposes, as n
Here is an example (I also updated my code above accordingly, as there was a
minor problem) to construct the module manually:
```python
mod = ModuleMetaDataWrapper(metadata)
mod.import_module(CSourceModule(dnnl_code))
mod.export_library("xyz.so")
loaded = tvm.runtime.load_module("xyz.so")
```
Afte
[quote="tqchen, post:4, topic:6579"]
this->imported_modules[0]->GetFunction("__DestroyModule"); destroy(); }
GetFunction(name) { if (name != "__InitModule" && name != "__DestroyModule") {
return this->imported_modules[0]->GetFunction(name); }
[/quote]
also cc @FrozenGene @junrushao1994
--
I think these are fair problems, and json is an OK solution for some particular
backends. However, I think it is particularly important for us to think about
the infrastructure implications in the long run. I think we want to discuss the
solution in a case-by-case manner.
The JSON runtime is
https://github.com/apache/incubator-tvm/pull/5506
which touches a related topic (it revamped the js runtime to directly use the
WebAssembly standard API).
See also how we got around the dlopen problem using the new RPC protocol.
---
Thanks for sharing the ideas.
https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025 is
the current proposed way for universal packaging, and that should resolve most
of the current concerns.
While it is always possible to introduce another layer of abstraction for
packag
Thanks for everyone who shared their thoughts, we will proceed to pick utils as
the canonical name.
---
[Visit Topic](https://discuss.tvm.ai/t/naming-consistency-util-vs-utils/6434/4)
to respond.
@maheshambule seems we have reached consensus. Please feel free to update the
PR to reflect the discussion; we only need to support the conversion but not
the runtime part.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/19) to respond.
Looks like we should proceed with the refactor. I opened
https://github.com/apache/incubator-tvm/issues/5490 to track this
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-conversion-from-std-string-to-tvm-string/6453/5)
to respond.
I will investigate around the next two available weekends(when impact to the CI
will be lower)
---
[Visit
Topic](https://discuss.tvm.ai/t/ci-lint-enabling-clang-format-based-lint-checks/6170/8)
to respond.
[quote="tqchen, post:1, topic:6544"]
Endpoint[client@n0]
[/quote]
Here is an implementation of the above proposal.
https://github.com/apache/incubator-tvm/pull/5484
---
[Visit Topic](https://discuss.tvm.ai/t/modularize-rpc-infra/6544/2) to respond.
RPC plays a key role in TVM's ecosystem by enabling remote profiling.
The current RPC contains two components: RPCSession, which implements the
server/client logic as well as parameter translation (translating a local
handle to a remote one), and RPCModule, which exposes the session's low-level
Yes, text format and round trip are on the roadmap.
---
[Visit Topic](https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/11) to
respond.
We are moving the passes to use the unified Pass infrastructure
https://docs.tvm.ai/api/python/ir.html#module-tvm.transform
After the refactor is completed, we should be able to use the trace API to
handle IR printing both in relay and tir.
https://docs.tvm.ai/api/python/ir.html#tvm.tran
@xqdan you are right, we can also mix functions of different levels in the same
IRModule
---
[Visit Topic](https://discuss.tvm.ai/t/ir-unified-tvm-ir-infra/4801/7) to
respond.
Let us keep the API consistent with the rest of the APIs (java, js, python).
When an NDArray is returned, we just keep it as a strong reference (no explicit
attachment). The isView is only used for very limited cases (e.g. in a callback
where we only want a weakref).
---
cc @yzhliu @haichen @jroesch @ajtulloch @liangfu @thierry
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-improve-pull-requests-with-respect-to-bug-fixes/6529/2)
to respond.
Thanks for starting the topic. I think one thing we do need to do is to reuse
existing cpu autotvm templates and possibly tune for wasm.
The lack of dlopen in wasm is not going to go away for a while due to the
special programming model. We recently have some rough ideas to get around it
and w
Here is a POC with update code for serialization compatibility
https://github.com/apache/incubator-tvm/pull/5438
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-conversion-from-std-string-to-tvm-string/6453/4)
to respond.