```python
@register_annotate_function("nn.global_avg_pool2d")
def global_avg_pool2d_rewrite(ref_call, new_args, ctx):
    """Rewrite function for global_avg_pool2d for stopping quantize"""
    if quantize_context().check_to_skip(ref_call):
        return None
    expr, x_kind = _get_expr_kind(new_args[0])
    if x_kind is None:
        return None
    # Realize the input to full precision, then stop quantizing downstream ops.
    expr = _forward_op(ref_call, [new_args[0].realize()])
    quantize_context().stop_quantize()
    return expr
```
Thank you for your reply.
---
[Visit Topic](https://discuss.tvm.apache.org/t/is-module-thread-safe/2759/14) to respond.
In my case, decreasing the batch size usually worked around it. Did you find a
proper solution to this problem, i.e. a way to let `module.run()` complete
without the CUDA out-of-memory error?
---
[Visit Topic](https://discuss.tvm.apache.org/t/cuda-got-error-cuda-error-launch-out-of-resources/4173/6) to respond.
Graph runtime functions are not thread safe, since they access per-runtime state.
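A minimal sketch of the usual workaround, assuming `lib` was produced by an
earlier `relay.build(...)` call and the model's input is named `"data"` (both
are my assumptions, not from the thread): share the compiled module, but give
each thread its own executor instance. Newer TVM spells the import
`tvm.contrib.graph_executor`; older releases call it `tvm.contrib.graph_runtime`.

```python
import threading
import numpy as np
import tvm
from tvm.contrib import graph_executor  # tvm.contrib.graph_runtime in older TVM

# `lib` is assumed to come from relay.build(...). The compiled code can be
# shared, but run()/set_input() mutate per-executor state, so every thread
# constructs its own GraphModule.
def worker(lib, data, out, idx):
    dev = tvm.cpu(0)
    module = graph_executor.GraphModule(lib["default"](dev))  # per-thread instance
    module.set_input("data", tvm.nd.array(data, dev))
    module.run()
    out[idx] = module.get_output(0).asnumpy()

batch = np.random.rand(1, 3, 224, 224).astype("float32")
results = [None] * 4
threads = [threading.Thread(target=worker, args=(lib, batch, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```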
---
[Visit Topic](https://discuss.tvm.apache.org/t/is-module-thread-safe/2759/13) to respond.
Yes, the weight layout transformation should be optimized away by constant folding.
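For reference, a minimal sketch of that pipeline (the toy conv network, its
shapes, and the NHWC target layout below are my own assumptions, not from the
thread): `ConvertLayout` inserts `layout_transform` ops around the conv, and
`FoldConstant` then pre-computes the transform on the constant weight at
compile time, so no per-inference layout transform remains for the weight.

```python
import numpy as np
import tvm
from tvm import relay

# Toy network: one conv2d whose weight is a Relay constant.
data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.const(np.random.rand(16, 3, 3, 3).astype("float32"))
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=16)
mod = tvm.IRModule.from_expr(relay.Function([data], conv))

seq = tvm.transform.Sequential([
    # Inserts layout_transform ops to move conv2d from NCHW to NHWC.
    relay.transform.ConvertLayout({"nn.conv2d": ["NHWC", "default"]}),
    # Evaluates layout_transform on the constant weight at compile time.
    relay.transform.FoldConstant(),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)  # no layout_transform on the weight should remain
```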
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-dose-tvm-elimitate-calls-of-conv-weights-layout-transform/8208/5) to respond.
For example, if we use `ctx = cuda`, both inputs run correctly.
I saw that the default `tvm.nd.array` context is cpu(0):
> input = tvm.nd.array(shape)
but if I designate the context as "cuda":
> input = tvm.nd.array(shape, ctx)
both inputs still run correctly, yet there is a small performance difference.
**Ca
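A minimal sketch of the two variants (the array contents and names are my own,
and note that `tvm.nd.array` takes the data itself, not just a shape): by
default the NDArray is allocated on `cpu(0)`, so a CUDA module must copy it to
the device before use; allocating it on the GPU up front avoids that
host-to-device copy, which would explain a small timing difference.

```python
import numpy as np
import tvm

data = np.random.rand(1, 3, 224, 224).astype("float32")

# Default: allocated on the host, cpu(0).
x_cpu = tvm.nd.array(data)

# Explicit CUDA context: allocated and filled on the GPU, so a CUDA
# module can consume it without an extra host-to-device copy.
ctx = tvm.gpu(0)            # spelled tvm.cuda(0) in newer TVM releases
x_gpu = tvm.nd.array(data, ctx)

print(x_cpu.ctx, x_gpu.ctx)  # cpu(0), gpu(0)
```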
is "GraphRuntime::Run" threadsafe ?
---
[Visit Topic](https://discuss.tvm.apache.org/t/is-module-thread-safe/2759/12) to respond.
What will be the default tile factor in the loop for any operation?
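As a point of reference, a minimal sketch (the template below and its name are
hypothetical, not from the thread): `cfg.define_split` does not use a single
default tile factor; it enumerates the factorizations of the axis extent, and
you can print the resulting ConfigSpace to see every candidate.

```python
import tvm
from tvm import te, autotvm

@autotvm.template("demo_elemwise")  # hypothetical template name
def demo_elemwise(n):
    A = te.placeholder((n, n), name="A")
    B = te.compute((n, n), lambda i, j: A[i, j] * 2.0, name="B")
    s = te.create_schedule(B.op)
    cfg = autotvm.get_config()
    i, j = s[B].op.axis
    # define_split enumerates factorizations of the i axis; there is no
    # fixed default tile factor -- the candidates are the divisors of n.
    cfg.define_split("tile_i", i, num_outputs=2)
    io, ii = cfg["tile_i"].apply(s, B, i)
    return s, [A, B]

# Inspect the generated search space without tuning anything.
task = autotvm.task.create("demo_elemwise", args=(64,), target="llvm")
print(task.config_space)  # lists every candidate split of the i axis
```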
---
[Visit Topic](https://discuss.tvm.apache.org/t/about-configspace-in-autotvm/8049/3) to respond.
When writing a TVM pass, it is often hard to see the resulting stmt directly
and clearly. Is there a good way to inspect the stmt and facilitate pass
writing and debugging?
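One common approach, shown as a minimal sketch on a toy schedule (the schedule
itself is just an assumed example): lower with `simple_mode=True` and print the
resulting module to see the stmt your pass will operate on.

```python
import tvm
from tvm import te

# Toy compute + schedule, just to have something to lower.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# simple_mode=True skips the extra lowering steps and yields a module whose
# body is the plain stmt -- print it before and after your pass runs.
print(tvm.lower(s, [A, B], simple_mode=True))
```

For a longer pipeline, `tvm.transform.PrintIR()` can also be inserted between
passes in a `Sequential` to dump the module at that exact point.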
---
[Visit Topic](https://discuss.tvm.apache.org/t/is-there-a-better-way-to-see-stmt-when-we-write-a-tvm-pass/8216/1) to respond.
I'm using Fortanix SGX to run models in an SGX enclave. Multi-threading is
supported in SGX, but I cannot make my program run multi-threaded; changing
the value of TVM_NUM_THREADS has no effect.
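For what it's worth, a minimal sketch of how `TVM_NUM_THREADS` is normally
applied outside an enclave (whether Fortanix forwards environment variables
into the enclave is a separate question I can't confirm): the variable must be
set before the TVM runtime spawns its thread pool.

```python
import os

# Must be set before the runtime's thread pool is initialized, i.e. before
# the first parallel kernel runs -- safest is before importing tvm at all.
os.environ["TVM_NUM_THREADS"] = "4"

import tvm  # noqa: E402
```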
---
[Visit Topic](https://discuss.tvm.apache.org/t/is-multi-threading-supported-when-tvm-target-