[TVM Discuss] [Questions] [Solved] How is __tvm_module_startup invoked?

2020-06-06 Thread JC Li via TVM Discuss
From this stackoverflow [page](https://stackoverflow.com/questions/39946715/llvm-how-to-execute-code-in-a-module-before-any-other-code), I got the answer finally. " *You can put the code you want to run early into a function and add that function to [ `llvm.global_ctors` ](http://llvm.or

[TVM Discuss] [Questions] How is __tvm_module_startup invoked?

2020-06-05 Thread JC Li via TVM Discuss
One thing I still don't understand, though, is how __tvm_module_startup gets called at program startup. @liangfu? Anyone? --- [Visit Topic](https://discuss.tvm.ai/t/how-is-tvm-module-startup-invoked/6891/4) to respond.

[TVM Discuss] [Questions] How is __tvm_module_startup invoked?

2020-06-05 Thread JC Li via TVM Discuss
I think I got it. The LLVM host module compilation generates it. src/target/llvm/llvm_module.cc: ``` TVM_REGISTER_GLOBAL("target.build.llvm").set_body_typed([](IRModule mod, std::string target) { auto n = make_object<LLVMModuleNode>(); n->Init(mod, target); return runtime::Module(n); }); ... void Init(

[TVM Discuss] [Questions] How is __tvm_module_startup invoked?

2020-06-05 Thread JC Li via TVM Discuss
@liangfu, I think this question is best directed to you, :slight_smile: The question above came up when I looked at this function in src/runtime/crt/packed_func.h: ``` TVMPackedFunc* g_fexecs = 0; uint32_t g_fexecs_count = 0; // Implement TVMModule::GetFunction // Put implementation in this file s

[TVM Discuss] [Questions] How is __tvm_module_startup invoked?

2020-06-05 Thread JC Li via TVM Discuss
I'm studying the apps/bundle_deploy example code, which runs on the CRT runtime. This function is called at startup to pre-load all operator functions: ``` // src/runtime/crt/crt_backend_api.c: int TVMBackendRegisterSystemLibSymbol(const char* name, void* ptr) { g_fexecs = vrealloc(g_fexecs, si
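
For reference, a minimal sketch of how such a system library is produced on the Python side, assuming the `llvm --system-lib` target string and `tvm.runtime.system_lib()` from TVM of this era (newer releases express the system-lib option differently):

```
import tvm
from tvm import te

# A trivial kernel stands in for the bundle_deploy model.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# "--system-lib" asks codegen to emit self-registering symbols: at program
# startup each generated function is handed to TVMBackendRegisterSystemLibSymbol.
mod = tvm.build(s, [A, B], target="llvm --system-lib", name="add_one")
mod.save("add_one.o")  # link this object into the deployed binary

# Inside the deployed program, functions are then looked up by name from the
# global system library instead of from a loaded shared object:
# fadd = tvm.runtime.system_lib()["add_one"]
```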

[TVM Discuss] [Questions] [Solved] Schedule not registered for 'mytarget'

2020-05-15 Thread JC Li via TVM Discuss
I finally figured it out. It was because I didn't specify "keys" when I created the new 'mytarget'. Once I added 'mytarget' as the key, the dense_strategy registration works like a charm... It would be really appreciated if there were a tutorial/docs on how to add a new target, :slight_smile:
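
A rough sketch of the role the keys play, assuming the `Target` object's `keys` attribute (the constructor is `tvm.target.create(...)` on older TVM releases):

```
import tvm

# Strategy dispatch walks the target's "keys": a target whose keys don't
# include "mytarget" never reaches an implementation registered for it.
target = tvm.target.Target("llvm")   # tvm.target.create("llvm") on older TVM
print(target.keys)                   # typically ['cpu'], so only cpu strategies apply

# A custom target therefore needs 'mytarget' among its keys before
# @dense_strategy.register("mytarget") is ever considered during lowering.
```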

[TVM Discuss] [Questions] Schedule not registered for 'mytarget'

2020-05-15 Thread JC Li via TVM Discuss
@hht, I double-checked; it seems the real problem might be that I declared 'mytarget' as a completely new target, instead of declaring it as a device under the target 'ext_dev'. Running ``` print(tvm.target.Target.current(allow_none=False)) ``` shows different results. With VTA: ``` relay/backe

[TVM Discuss] [Questions] Schedule not registered for 'mytarget'

2020-05-14 Thread JC Li via TVM Discuss
Thank you, @hht. It seems my target isn't constructed appropriately. But why didn't it complain at all in the earlier run? ``` File "relay_linearnet.py", line 32, in <module> print(tvm.target.Target.current(allow_none=False)) File "/work/git_repo/tvm/python/tvm/target/target.py", line 103,

[TVM Discuss] [Questions] Schedule not registered for 'mytarget'

2020-05-14 Thread JC Li via TVM Discuss
I added a new 'mytarget' to the target list and added a dense strategy registration in python/tvm/relay/op/strategy/mytarget.py as below: ``` @dense_strategy.register("mytarget") def dense_strategy_mytarget(attrs, inputs, out_type, target): strategy = _op.OpStrategy() strategy.add_implementation(wrap_
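
The snippet above is cut off; for reference, a hedged sketch of what a complete registration of this shape typically looks like, modeled on the CPU strategy in python/tvm/relay/op/strategy/ (the topi compute and schedule used here are placeholders, not the poster's actual implementations):

```
from tvm import topi                      # plain `import topi` on older TVM
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import (
    dense_strategy,
    wrap_compute_dense,
    wrap_topi_schedule,
)


@dense_strategy.register("mytarget")
def dense_strategy_mytarget(attrs, inputs, out_type, target):
    """Choose a compute/schedule pair for nn.dense on 'mytarget'."""
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        wrap_compute_dense(topi.nn.dense),                # placeholder compute
        wrap_topi_schedule(topi.generic.schedule_dense),  # placeholder schedule
        name="dense.mytarget",
    )
    return strategy
```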

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-04 Thread JC Li via TVM Discuss
So is it the case that the runtime won't run independent subgraphs in parallel? Or is that only true for certain runtimes? I'm new to TVM, but a quick search shows some built-in infrastructure: ``` /work/git_repo/tvm/src/runtime$ grep -R -i parallel * crt/crt_backend_api.c:int TVMBackendParallelLaunch(FTVMParallelLa

[TVM Discuss] [Questions] Why does the FoldConstant optimization need schedule ops?

2020-04-29 Thread JC Li via TVM Discuss
I stumbled over the same confusion when tracing the FoldConstant optimization pass. Some of my debug prints show the process below. I don't know why (**anyone, please explain if you know**), but Relay kicks off the 'whole compilation process' by using the Interpreter (interpreter.cc). * 'EtaEx
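
A minimal sketch of invoking the pass in isolation, which also shows why an evaluator gets pulled in: FoldConstant has to actually evaluate the constant subexpression at compile time (names and shapes here are illustrative):

```
import numpy as np
import tvm
from tvm import relay

x = relay.const(np.ones((2, 2), dtype="float32"))
y = relay.var("y", shape=(2, 2), dtype="float32")
f = relay.Function([y], relay.add(relay.add(x, x), y))
mod = tvm.IRModule.from_expr(f)

# FoldConstant evaluates the constant x + x subtree at compile time, which is
# why the interpreter / compile engine shows up while running this "optimization".
folded = relay.transform.FoldConstant()(mod)
print(folded)
```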

[TVM Discuss] [Questions] TVM terms: relay, topi, tir, te

2020-04-23 Thread JC Li via TVM Discuss
I've been studying TVM for quite a few weeks, but I'm still not crystal-clear about the relationship between these items: ***relay, tir, topi, te.*** I'll try to summarize my understanding below; please correct me where my description is wrong. Thanks in advance. 1. ***Relay*** Relay is the replacement for
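
As a rough illustration of the te/tir part of that summary, a small sketch: a tensor-expression compute plus a schedule, lowered into TIR by `tvm.lower()`:

```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")  # te: declarative compute

s = te.create_schedule(C.op)                  # schedule: how to execute it
lowered = tvm.lower(s, [A, B, C], name="vector_add")
print(lowered)                                # the lowered form is TIR
```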

[TVM Discuss] [Questions] How to map nn.conv2d to VTA?

2020-04-17 Thread JC Li via TVM Discuss
@liangfu, thanks for your reply. Those examples use tensor expressions directly to construct the compute and schedule, then call vta.build(schedule, ...). I want to use relay.build() to compile Relay IR directly, which is closer to the neural-network import flow. Any idea?

[TVM Discuss] [Questions] How to map nn.conv2d to VTA?

2020-04-16 Thread JC Li via TVM Discuss
I modified the source code to mimic deploy_classification.py and include the quantization and graph_pack() steps; now compilation goes well until it starts lowering the conv2d + relu function: ``` tvm/python/tvm/relay/backend/compile_engine.py, select_implementation(), op.name= nn.conv2d
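
For context, a hedged sketch of the quantization step being mimicked from deploy_classification.py, using a tiny stand-in network (the qconfig values are illustrative, and the VTA-specific graph_pack() call is omitted because its arguments come from the VTA environment):

```
import numpy as np
import tvm
from tvm import relay

# Tiny stand-in network; in the real flow mod/params come from a frontend import.
data = relay.var("data", shape=(1, 16, 14, 14), dtype="float32")
weight = relay.var("weight", shape=(16, 16, 3, 3), dtype="float32")
body = relay.nn.relu(relay.nn.conv2d(data, weight, kernel_size=(3, 3),
                                     padding=(1, 1), channels=16))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], body))
params = {"weight": tvm.nd.array(
    np.random.uniform(-1, 1, (16, 16, 3, 3)).astype("float32"))}

# Quantization step mimicked from deploy_classification.py (values illustrative).
with relay.quantize.qconfig(global_scale=8.0, skip_conv_layers=[0]):
    mod = relay.quantize.quantize(mod, params=params)
```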

[TVM Discuss] [Questions] How to map nn.conv2d to VTA?

2020-04-15 Thread JC Li via TVM Discuss
I'm studying the VTA design and how it is mapped onto TVM. The resnet18 tutorial is good; however, resnet18 itself is too complicated to follow. Instead, I'm trying a simple nn.conv2d + nn.relu network as below: ``` def conv2d(data, weight=None, **kwargs): name = kwargs.get("n
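
The helper-based definition above is cut off; a minimal stand-alone version of the same kind of network, written directly against the Relay API and compiled with relay.build() for a plain `llvm` target (the VTA-specific steps are omitted), might look like this:

```
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 16, 14, 14), dtype="float32")
weight = relay.var("weight", shape=(16, 16, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1), channels=16)
net = relay.nn.relu(conv)

func = relay.Function(relay.analysis.free_vars(net), net)
mod = tvm.IRModule.from_expr(func)

# relay.build() drives the whole flow; for VTA the quantize/graph_pack steps
# would run before this call.  Newer TVM returns a factory module here,
# older releases return a (graph, lib, params) tuple.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")
```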

[TVM Discuss] [Questions] [Resolved] Why does VTA build have its own path?

2020-04-07 Thread JC Li via TVM Discuss
Ah, I get it. Any compilation that starts from Relay calls into relay.build(...), which goes through what I called the "normal" build flow that starts with high-level optimizations. The process is followed by low-level optimizations, mainly at the TOPI level. VTA calls vta.build

[TVM Discuss] [Questions] [Resolved] Why does VTA build have its own path?

2020-04-07 Thread JC Li via TVM Discuss
I see a path on the normal TVM build side: **tvm/python/tvm/relay/build_module.py** --> **tvm/src/relay/backend/build_module.cc** Lower(...) --> LowerInternal(...) --> **tvm/python/tvm/relay/backend/_backend.py** lower(...) --> **tvm/python/tvm/driver/build_module.py** lower(...) The last one

[TVM Discuss] [Questions] Why does VTA build have its own path?

2020-04-07 Thread JC Li via TVM Discuss
This was very confusing when I started reading the TVM source code and trying to figure out the build paths. The normal build flow seems to use **tvm/python/tvm/relay/build_module.py**, which itself is a wrapper for C++ implementations under the hood, such as **tvm/src/relay/backend/build_module.cc*

[TVM Discuss] [Questions] Error in constructing IRModule from Relay C++ APIs

2020-04-05 Thread JC Li via TVM Discuss
Trying to mimic tests/cpp/relay_build_module_test.cc to construct a simple Dense + Relu + Add function as below. ``` auto tensor_type_f32_16_8 = relay::TensorType({16, 8}, DataType::Float(32)); auto tensor_type_f32_8_8 = relay::TensorType({8, 8}, DataType::Float(32)); auto a = relay::Var

[TVM Discuss] [Questions] Cached Key/Function in Lower process

2020-04-02 Thread JC Li via TVM Discuss
I noticed the terms "CCachedKey" and "CCachedFunc" in the lowering process in /src/relay/backend/compile_engine.cc. 1. Why is there a 'cache'? 2. What's its relationship to the lowering process? Thanks. --- [Visit Topic](https://discuss.tvm.ai/t/cached-key-function-in-lower-process/6192/1) to respond.

[TVM Discuss] [Questions] How to specify target on macbook pro

2020-04-02 Thread JC Li via TVM Discuss
Macs haven't shipped with an NVIDIA graphics card for a LONG time (>5 years?), which means your laptop doesn't have a CUDA-enabled device. You'll need a different platform to try CUDA. --- [Visit Topic](https://discuss.tvm.ai/t/how-to-specify-target-on-macbook-pro/6182/2) to respond.

[TVM Discuss] [Questions] Relationship between strategy/compute/schedule?

2020-04-01 Thread JC Li via TVM Discuss
Ah, I found this in the documentation: https://docs.tvm.ai/dev/relay_op_strategy.html --- [Visit Topic](https://discuss.tvm.ai/t/relationship-between-strategy-compute-schedule/6175/2) to respond.

[TVM Discuss] [Questions] Relationship between strategy/compute/schedule?

2020-04-01 Thread JC Li via TVM Discuss
I started diving a bit deeper into the process from Relay to TVM IR. The notion of a **strategy** is completely new to me; all the tutorials on the TVM documentation site focus on **compute** and **schedule**. My understanding is that **compute** defines WHAT, while **schedule** defines HOW or WHEN.

[TVM Discuss] [Questions] Relationship between tvm.build() and relay.build()

2020-03-23 Thread JC Li via TVM Discuss
It seems to me that relay.build() and tvm.build() are processed through completely different paths in the TVM source repo. **It would be appreciated if anyone could correct me or confirm.** Thanks in advance. The relay.build() call in Python is quickly passed into the C++ side, mainly processed within src/relay
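
A hedged side-by-side sketch of the two entry points (shapes and names are illustrative): `tvm.build()` compiles one scheduled tensor-expression kernel, while `relay.build()` compiles a whole Relay IRModule and internally lowers each fused operator through that same low-level path:

```
import tvm
from tvm import te, relay

# --- tvm.build: one operator, starting from a scheduled tensor expression ---
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
op_mod = tvm.build(s, [A, B], target="llvm", name="scale2")

# --- relay.build: a whole graph (here a one-op graph), lowered internally ---
x = relay.var("x", shape=(n,), dtype="float32")
f = relay.Function([x], relay.multiply(x, relay.const(2.0)))
graph_mod = relay.build(tvm.IRModule.from_expr(f), target="llvm")
```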

[TVM Discuss] [Questions] Relationship between tvm.build() and relay.build()

2020-03-23 Thread JC Li via TVM Discuss
I'm trying to understand the relationship between relay.build() and tvm.build(). @vinx13, thanks for your reply. Does this mean a neural network imported from a certain framework, say MXNet (e.g. [this tutorial](https://docs.tvm.ai/tutorials/frontend/from_mxnet.html)), can only use relay.build()
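
For reference, a sketch of the frontend path in question, assuming the standard `relay.frontend.from_mxnet()` importer from the linked tutorial (model name and input shape are illustrative):

```
import tvm
from tvm import relay
from mxnet.gluon.model_zoo import vision

block = vision.get_model("resnet18_v1", pretrained=True)
shape_dict = {"data": (1, 3, 224, 224)}
mod, params = relay.frontend.from_mxnet(block, shape_dict)

# The imported IRModule goes through relay.build(); tvm.build() is only called
# internally, per fused operator, during this step.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```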