Hi hjiang,
Thank you very much for your reply! I will try to clarify the two questions you
mentioned:
> “any OpenCL-compatible devices” and “vendor-specific optimization” are in
> conflict; could you give more detail about the plan here to balance these
> two parts and how to reduce related c
@hcho3 looks like another symbol conflict?
---
[Visit
Topic](https://discuss.tvm.ai/t/conflict-with-xgboost-when-thrust-is-enabled/6889/2)
to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click
here](https://discuss.tvm.ai/emai
I guess that's ok. Let's see how it works and we can refine it later if needed.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/33) to
respond.
When `USE_THRUST=ON`, an unknown CUDA error occurs:
```
File "/home/ubuntu/tvm/src/runtime/cuda/cuda_device_api.cc", line 108
CUDA: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading: unknown
error
```
It can be reproduced with the following script:
```
import numpy as np
import tvm
```
In most cases we do need to generate the host code together with the device
code before we run it. One way to resolve this for a re-targetable build is
to not specify `target_host` in the program (as it can be optional before
split-host-device), and then manually re-sp
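The deferred `target_host` idea above can be illustrated with a toy sketch. This is plain Python, not the TVM API; the names `make_program` and `split_host_device` are hypothetical stand-ins for illustration only:

```python
# Toy illustration (hypothetical names, not the TVM API): leave target_host
# unset when building the program, and re-specify it just before the
# split-host-device step.
def make_program(target, target_host=None):
    # target_host stays optional here, keeping the program re-targetable
    return {"target": target, "target_host": target_host}

def split_host_device(prog, host):
    # the host target is bound late, right before host/device splitting
    return dict(prog, target_host=prog["target_host"] or host)

prog = make_program("cuda")             # no target_host yet
print(split_host_device(prog, "llvm"))  # host re-specified at split time
```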
Fair point; how about the `llvmjit` and `llvmcpu` proposal?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/31) to
respond.
Going back to the `target_host` question. Another argument against it is that a
specific device can be present in different systems with different host
processors. This would necessitate having different targets for the same
device if `target_host` is part of the target description.
I don't
[quote="tqchen, post:28, topic:6844"]
Another way to think about it is that llvm itself is a target, and we happened
to have a JIT engine locally for that target.
[/quote]
This is precisely the point of view that I strongly disagree with. The code
that runs is not LLVM IR; it must be compiled
I think there is still value in having the JIT present, as a lot of our current
examples depend on it. Another way to think about it is that llvm itself is a
target, and we happened to have a JIT engine locally for that target.
We can discuss the alternatives, for example, introduce an llvmjit tar
The question is "what do we want the target to guarantee?". If we want "llvm"
to include both CPU and JIT, then it should always mean that both features are
present. Whether the target is local or not is a feature of the runtime
environment and not the compiler. On that note, I think we sho
I agree with your concern; one thing we could do is to add a default set of
keys for a target id when keys are not explicitly present. For example, cuda
will always have cuda and gpu attached to its keys during creation time.
We cannot automatically add uncommon keys like tensorcore, though. Bu
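As a rough sketch of that idea (plain Python, not TVM internals; the default-key table below is an assumption made up for illustration):

```python
# Sketch of attaching default keys at target creation time when the
# config (e.g. parsed from json) does not list them explicitly.
# DEFAULT_KEYS is illustrative, not TVM's actual registry.
DEFAULT_KEYS = {
    "cuda": ["cuda", "gpu"],
    "llvm": ["cpu"],
}

def create_target(config):
    target = dict(config)
    if not target.get("keys"):
        # keys omitted in the config: fall back to the id's defaults
        target["keys"] = DEFAULT_KEYS.get(target["id"], [])
    return target

# cuda always gets the "cuda" and "gpu" keys attached at creation time
print(create_target({"id": "cuda"}))
```

Explicit keys in the config would still win, so uncommon keys like tensorcore can be added by hand when needed.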
Right now the jit and cpu do not necessarily conflict with each other: if the
target is local, it can be exported normally as a library; if it is a
cross-compilation target, then we cannot directly execute it, but it can still
be exported as a library.
So llvm right now means cpu, and jit if
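A toy model of that distinction (plain Python, purely illustrative; the capability names are made up for this sketch):

```python
# Illustrative only: "jit" and "cpu" need not conflict for the llvm target.
# A local target can both JIT-execute and export a library; a
# cross-compilation target can still export a library but not execute.
def llvm_capabilities(is_local):
    caps = {"export_library"}      # exporting is always available
    if is_local:
        caps.add("jit_execute")    # only when host and target match
    return caps

print(sorted(llvm_capabilities(is_local=True)))
print(sorted(llvm_capabilities(is_local=False)))
```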
Keys are an important field in the target that makes other modules work. Since
the target can be created from json, I'm worried that if people forget to add
certain keys to the target, it might cause undesired behavior.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6
Another thought is that we should **remove "llvm" as a target**. Right now
target = "llvm" means "cpu", but it also means "jit". We should replace it
with something that has a clear meaning, and should be independent of whether
the LLVM framework is used to generate code for it or not.
I don't think that would become a problem under the new module serialization:
https://tvm.apache.org/docs/dev/introduction_to_module_serialization.html
We will simply recover several DSOModules, all of which share the same library.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-runtime-jso
Thanks, that sounds like it should be relatively straightforward to integrate.
Ramana
---
[Visit
Topic](https://discuss.tvm.ai/t/per-axis-quantization-support-for-tflite/6726/4)
to respond.
Hi Shawn_Inspur,
This RFC does not support int8. How can I make it work with int8?
Thanks
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-tensor-core-optimization-of-winograd-conv2d-on-tensor-core/6543/4)
to respond.