From this stackoverflow
[page](https://stackoverflow.com/questions/39946715/llvm-how-to-execute-code-in-a-module-before-any-other-code),
I finally got the answer:

> "You can put the code you want to run early into a function and add that
> function to `llvm.global_ctors`."
One thing I still don't understand is how `__tvm_module_startup` gets called
at program startup, though. @liangfu? Anyone?
---
I think I got it. The LLVM host module compilation generates it.
src/target/llvm/llvm_module.cc:
```
TVM_REGISTER_GLOBAL("target.build.llvm")
    .set_body_typed([](IRModule mod, std::string target) {
      auto n = make_object<LLVMModuleNode>();
      n->Init(mod, target);
      return runtime::Module(n);
    });
...
void Init(
```
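To see this end to end, here is a minimal sketch (assuming an `llvm` target
built with the `--system-lib` flag, as in the bundle_deploy example; the
"addone" kernel name is made up) that dumps the generated LLVM IR and greps
for the startup function and its `llvm.global_ctors` entry:
```
import tvm
from tvm import te

# Build a trivial kernel as a system library; the LLVM codegen then emits
# __tvm_module_startup and registers it in llvm.global_ctors.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
lib = tvm.build(s, [A, B], name="addone", target="llvm --system-lib")

# Dump the LLVM IR and look for the startup machinery.
for line in lib.get_source("ll").splitlines():
    if "global_ctors" in line or "tvm_module_startup" in line:
        print(line)
```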
---
@liangfu, I think this question goes best to you, :slight_smile:
The above question came up when I was looking at this function in
src/runtime/crt/packed_func.h:
```
TVMPackedFunc* g_fexecs = 0;
uint32_t g_fexecs_count = 0;
// Implement TVMModule::GetFunction
// Put implementation in this file s
```
---
I'm studying the apps/bundle_deploy example code, which runs on the CRT
runtime. This function is called at startup to pre-load all operator
functions:
```
// src/runtime/crt/crt_backend_api.c:
int TVMBackendRegisterSystemLibSymbol(const char* name, void* ptr) {
  g_fexecs = vrealloc(g_fexecs, si
```
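For context, a minimal sketch of the consuming side (assuming a
`--system-lib` build linked into the current process, with the hypothetical
"addone" kernel from the sketch above): whatever the startup code registered
through `TVMBackendRegisterSystemLibSymbol` becomes visible through the
system-lib module:
```
import tvm

# The system-lib module aggregates every symbol the startup code registered
# via TVMBackendRegisterSystemLibSymbol when the process started.
syslib = tvm.runtime.system_lib()

# "addone" is a hypothetical kernel name; this only succeeds if the
# generated code was actually linked into this process.
f = syslib.get_function("addone")
print(f)
```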
---
I finally figured it out. It was because I didn't specify "keys" when I
created the new 'mytarget'. Once I added 'mytarget' as a key, the
dense_strategy registration works like a charm...
It would be really appreciated if there were a tutorial/docs on how to add a
new target, :slight_smile:
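For anyone hitting the same issue, a minimal sketch of what "keys" means here
(the `-keys=` option in the target string requires a recent TVM; 'mytarget'
is my own key): strategy registrations dispatch on a target's *keys*, not on
its name:
```
import tvm

# A target's keys decide which strategy registrations apply to it. Keeping
# "cpu" in the list preserves the generic CPU fallbacks.
tgt = tvm.target.Target("llvm -keys=mytarget,cpu")
print(tgt.keys)  # ['mytarget', 'cpu']; @dense_strategy.register("mytarget") now fires
```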
---
@hht, I double-checked; it seems the real problem might be that I declared
'mytarget' as a completely new target, instead of declaring it as a device
under the 'ext_dev' target. Running
```
print(tvm.target.Target.current(allow_none=False))
```
shows different results.
With VTA:
```
relay/backe
```
---
Thank you, @hht. It seems my target isn't constructed appropriately. But why
didn't it complain at all in the earlier run?
```
File "relay_linearnet.py", line 32, in
print(tvm.target.Target.current(allow_none=False))
File "/work/git_repo/tvm/python/tvm/target/target.py", line 103,
```
---
I added a new 'mytarget' to the target list and added a dense strategy
registration in python/tvm/relay/op/strategy/mytarget.py as below:
```
@dense_strategy.register("mytarget")
def dense_strategy_mytarget(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()
    strategy.add_implementation(wrap_compute_dense(topi.nn.dense),
                                wrap_topi_schedule(topi.generic.schedule_dense),
                                name="dense.mytarget")
    return strategy
```
---
Is it the case that the runtime won't run independent subgraphs in parallel?
Or is it only certain runtimes that won't do so?
---
I'm new to TVM, but a quick search shows some built-in infrastructure:
```
/work/git_repo/tvm/src/runtime$ grep -R -i parallel *
crt/crt_backend_api.c:int TVMBackendParallelLaunch(FTVMParallelLa
```
---
I stumbled into the same confusion when tracing the FoldConstant optimization
pass. Some of my debug prints show the process below.
I don't know why (**anyone, please explain if you know**), but Relay kicks
off the 'whole compilation process' by using the Interpreter (interpreter.cc).
* 'EtaEx
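A minimal sketch of the behavior in question (standard Relay APIs; shapes and
constants are made up): FoldConstant evaluates constant subexpressions at
compile time by running them through the interpreter, which is why the
interpreter shows up in the middle of compilation:
```
import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(2,), dtype="float32")
c = relay.const(np.ones(2, dtype="float32"))
y = x + (c + c)  # (c + c) is a constant subexpression

mod = tvm.IRModule.from_expr(relay.Function([x], y))
# FoldConstant evaluates (c + c) with the Relay interpreter and replaces it
# with a single constant in the transformed module.
print(relay.transform.FoldConstant()(mod))
```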
---
I've been studying TVM for quite a few weeks and am still not crystal-clear
about the relationship between these items: ***relay, tir, topi, te***. I'll
try to summarize my understanding; please correct my description below.
Thanks in advance.
1. ***Relay***
Relay is the replacement for
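For concreteness, here is a minimal sketch of how I understand the layers
connect (assuming a TVM where TOPI lives under `tvm.topi`; the relu example
is arbitrary): TOPI implements operators on top of `te.compute`, and lowering
a scheduled `te` stage produces TIR:
```
import tvm
from tvm import te, topi

A = te.placeholder((8, 8), name="A")
B = topi.nn.relu(A)           # a TOPI operator, defined internally via te.compute
s = te.create_schedule(B.op)  # a te schedule decides HOW/WHEN to compute it
print(tvm.lower(s, [A, B], simple_mode=True))  # lowering produces tir
```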
---
@liangfu, thanks for your reply. Those examples use the tensor expression API
directly to construct compute and schedule, then call vta.build(schedule, ...).
I want to use relay.build() to directly compile Relay IR, which is closer to
the neural-network import flow.
Any idea?
---
I modified the source code to mimic deploy_classification.py, including the
quantization and graph_pack() process. Now compilation goes well until it
starts lowering the conv2d + relu function:
```
tvm/python/tvm/relay/backend/compile_engine.py, select_implementation(),
op.name= nn.conv2d
```
---
I'm studying the VTA design and how it maps onto TVM. The resnet18 tutorial
is good; however, resnet18 itself is too complicated to follow. Instead, I'm
trying a simple nn.conv2d + nn.relu network, as below:
```
def conv2d(data, weight=None, **kwargs):
    name = kwargs.get("n
```
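Since the snippet above got cut off, here is a minimal self-contained sketch
of the same idea (shapes and layout are made up for illustration): a Relay
function of nn.conv2d followed by nn.relu:
```
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 16, 14, 14), dtype="float32")
weight = relay.var("weight", shape=(16, 16, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1), channels=16)
out = relay.nn.relu(conv)
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
print(mod)
```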
---
Ah, I get it.
All compilation processes that start with Relay call into relay.build(...),
which goes through what I called the "normal build flow" that starts with
high-level optimizations. The process is followed up with low-level
optimizations, mainly at the TOPI level.
The VTA calls vta.build
---
I see a path on the TVM normal-build side:
**tvm/python/tvm/relay/build_module.py** -->
**tvm/src/relay/backend/build_module.cc** Lower(...) --> LowerInternal(...) -->
**tvm/python/tvm/relay/backend/_backend.py** lower(...) -->
**tvm/python/tvm/driver/build_module.py** lower(...)
The last one
---
This was very confusing when I started reading the TVM source code, trying to
figure out the build paths.
The normal build flow seems to use **tvm/python/tvm/relay/build_module.py**,
which is itself a wrapper for C++ implementations under the hood, such as
**tvm/src/relay/backend/build_module.cc**
---
Trying to mimic tests/cpp/relay_build_module_test.cc to construct a simple
Dense + Relu + Add function, as below.
```
auto tensor_type_f32_16_8 = relay::TensorType({16, 8}, DataType::Float(32));
auto tensor_type_f32_8_8 = relay::TensorType({8, 8}, DataType::Float(32));
auto a = relay::Var
```
---
I noticed the terms "CCachedKey" and "CCachedFunc" in the lowering process in
src/relay/backend/compile_engine.cc.
1. Why is there a 'cache'?
2. What's its relationship to the lowering process?
Thanks.
---
Macs haven't used NVIDIA graphics cards for a LONG time (>5 years?), which
means your laptop doesn't have a CUDA-enabled device. You'll need a different
platform to try CUDA.
---
Ah, I found this in the docs: https://docs.tvm.ai/dev/relay_op_strategy.html
---
I started diving a bit deeper into the process from Relay to TVM IR. The
**strategy** is a completely new notion that popped up.
All tutorials on the TVM documentation site focus on **compute** and
**schedule**. My understanding is that **compute** defines WHAT, while
**schedule** defines HOW or WHEN.
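To make the WHAT/HOW split concrete, a minimal sketch using the standard te
APIs (the doubling computation and the split factor are arbitrary):
```
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
# compute: WHAT to calculate, as a pure expression.
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

# schedule: HOW/WHEN to calculate it; here, split the loop by 32.
s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)
print(tvm.lower(s, [A, B], simple_mode=True))
```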
---
It seems to me that relay.build() and tvm.build() are processed through
completely different paths in the TVM source repo. **I would appreciate it if
anyone could correct me or confirm.** Thanks in advance.
The relay.build() call in Python is quickly passed on to the C++ side, mainly
processed within src/relay
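For reference, a minimal sketch contrasting the two entry points (the relu
and doubling kernels are illustrative; with the API of that time, relay.build
returns a (graph_json, lib, params) triple):
```
import tvm
from tvm import relay, te

# relay.build(): whole-graph path (Relay passes, per-op lowering, codegen).
x = relay.var("x", shape=(4,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
graph_json, lib, params = relay.build(mod, target="llvm")

# tvm.build(): single-kernel path (te compute/schedule straight to codegen).
A = te.placeholder((4,), name="A")
B = te.compute((4,), lambda i: A[i] * 2.0, name="B")
kernel = tvm.build(te.create_schedule(B.op), [A, B], target="llvm")
```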
---
I'm trying to understand the relationship between relay.build() and
tvm.build().
---
@vinx13, thanks for your reply. Does this mean a neural network imported from
a certain framework, say MXNet (e.g. [this
tutorial](https://docs.tvm.ai/tutorials/frontend/from_mxnet.html)), can only
use relay.build()