[Apache TVM Discuss] [Questions] How to extract tvm module

2021-12-30 Thread chenugray via Apache TVM Discuss
How do I dump this graph? --- [Visit Topic](https://discuss.tvm.apache.org/t/how-to-extract-tvm-module/2167/22) to respond.
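One way to dump the graph is to write out the JSON held by the compiled module. This is a sketch assuming the object returned by `relay.build` exposes a `get_graph_json()` method (as TVM's `GraphExecutorFactoryModule` does); the helper name `dump_graph_json` is illustrative:

```python
import json

def dump_graph_json(factory_module, path):
    """Write the graph JSON held by a compiled module to `path`.

    Assumes `factory_module` exposes get_graph_json() returning the
    executor graph as a JSON string.
    """
    graph_json = factory_module.get_graph_json()
    # Round-trip through json to validate and pretty-print the string.
    with open(path, "w") as f:
        json.dump(json.loads(graph_json), f, indent=2)
    return graph_json
```

Usage would be something like `dump_graph_json(lib, "graph.json")` after `lib = relay.build(...)`.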

[Apache TVM Discuss] [Questions] What's specific meaning search space in TVM

2021-12-30 Thread Choi95 via Apache TVM Discuss
What is the specific meaning of "search space" in TVM, and how can a search space be expressed mathematically? --- [Visit Topic](https://discuss.tvm.apache.org/t/whats-specific-meaning-search-space-in-tvm/11805/1) to respond.
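In AutoTVM terms, a search space is the set of all schedule configurations a template can produce; mathematically it is the Cartesian product of the domains of the tuning knobs, so its size is the product of the domain sizes. A pure-Python sketch (the knob names and values here are illustrative, not taken from any real schedule template):

```python
from itertools import product

# Illustrative tuning knobs: each maps a name to its finite domain.
knobs = {
    "tile_x": [1, 2, 4, 8],
    "tile_y": [1, 2, 4, 8],
    "unroll": [0, 1],
}

# The search space is the Cartesian product of the knob domains,
# so it contains 4 * 4 * 2 = 32 configurations.
space = [dict(zip(knobs, values)) for values in product(*knobs.values())]
print(len(space))  # → 32
```

A tuner then searches this finite set for the configuration with the best measured runtime.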

[Apache TVM Discuss] [Questions] Generate native C code from TVM IR

2021-12-30 Thread Andrew Reusch via Apache TVM Discuss
There are a couple of different targets whose output is so similar to C (e.g. CUDA, OpenCL) that some of the functionality was extracted into a common superclass, `CodeGenC`. When you specify `target="c"`, TVM uses `CodeGenCHost` in `codegen_c_host.cc`. You might look at that for more detail.

[Apache TVM Discuss] [Questions] How to call all cores in biglittle arm cpu

2021-12-30 Thread vincentily via Apache TVM Discuss
I run my model on the RK3399's CPU, which has four A53 cores and two A72 cores. When running the model, I found that only the big cores are occupied. My question is: how can I make the remaining four little cores run at the same time to improve performance? (Before AutoTVM tuning, my model takes about 500 ms, which is too slow.)

[Apache TVM Discuss] [Questions] Can we get device_type from runtime module?

2021-12-30 Thread Xu via Apache TVM Discuss
Hello, I compiled my Relay graph to a DSO library, a JSON graph, and a parameter binary file like below:

```python
graph, lib, lowered_params = relay.build(
    mod, target="opencl", target_host="llvm --runtime=c++", params=params
)
lib.export_library(os.path.join(build_dir, name + ".so"))
```

When I'm deploying this…
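One way to recover the device type from a compiled library is to inspect the `type_key` of the module and the modules it imports; an OpenCL build typically shows up as an imported module whose `type_key` is `"opencl"`. A sketch, assuming the object behaves like a `tvm.runtime.Module` (a `type_key` attribute and an `imported_modules` list):

```python
def device_type_keys(mod):
    """Collect the type_key of a module and everything it imports.

    Assumes `mod` looks like a tvm.runtime.Module: it has a `type_key`
    attribute and an `imported_modules` list of modules of the same shape.
    """
    keys = [mod.type_key]
    for sub in mod.imported_modules:
        keys.extend(device_type_keys(sub))
    return keys

# e.g. loaded = tvm.runtime.load_module("net.so")
# "opencl" in device_type_keys(loaded) would indicate an OpenCL build.
```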

[Apache TVM Discuss] [Questions] Bert-large masked lm pre-quantization model build failed

2021-12-30 Thread chenugray via Apache TVM Discuss
```python
from pytorch_pretrained_bert import BertForMaskedLM
import torch

def main(args):
    bert_model_origin = BertForMaskedLM.from_pretrained("bert-large-uncased")
    example_tensor = torch.randint(0, 100, (1, 256))
    model_int8 = torch.quantization.quantize_dynamic(bert_m…
```
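The truncated call above is presumably `torch.quantization.quantize_dynamic`. For reference, here is a self-contained sketch of that API on a small stand-in model (the `Sequential` here is illustrative, taking the place of the BERT model in the post):

```python
import torch

# Small stand-in model; in the original post this is the
# BertForMaskedLM instance.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())

# Dynamic quantization swaps the Linear layers for int8 versions whose
# weights are quantized ahead of time (activations at runtime).
model_int8 = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(model_int8[0])  # the Linear is now a dynamically quantized module
```

Whether the resulting quantized module then converts cleanly through `relay.frontend.from_pytorch` depends on the TVM version's pre-quantized-model support, which seems to be what this thread is about.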