[TVM Discuss] [Questions] Replicating operator with pattern rewrite got stuck in an infinite loop

2020-09-02 Thread Steve via TVM Discuss
Hi All, I was experimenting with decomposing a Relay expression (an operator or a function) into multiple operators of the same kind. Mainly, if I have to do a vector addition, I just want to replicate the addition so that each copy operates on part of the data. For example, to add two vectors of s
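The data-partitioning idea in the question can be sketched in plain Python (not Relay — in Relay this would be a pattern-rewrite pass producing strided_slice + add + concatenate; the function name here is invented for illustration):

```python
# Hedged sketch: replace one vector addition with several additions
# that each cover a slice of the data, then stitch the results back.
def chunked_add(x, y, num_chunks):
    """Split x + y into num_chunks partial additions."""
    assert len(x) == len(y)
    n = len(x)
    step = (n + num_chunks - 1) // num_chunks  # ceil division
    out = []
    for start in range(0, n, step):
        xs = x[start:start + step]
        ys = y[start:start + step]
        out.extend(a + b for a, b in zip(xs, ys))  # one partial add
    return out

# The decomposed form must equal the original single addition.
x = [1, 2, 3, 4, 5, 6]
y = [10, 20, 30, 40, 50, 60]
assert chunked_add(x, y, 2) == [a + b for a, b in zip(x, y)]
```

The rewrite pass itself then only has to emit this slice/add/concat structure in place of the original add; the infinite loop danger is that the emitted adds match the same pattern again, so the pattern must be written to exclude already-split additions.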

[TVM Discuss] [Questions] Traversing Relay Graph order (source to sink, sink to source)

2020-06-04 Thread Steve via TVM Discuss
@comaniac - are you assuming that the user needs to extend the ExprMutator class? I have mostly been a user of TVM, and now I'd like to spend some time understanding Relay. How does this method differ from the post_order_visit function provided by TVM? [quote="comaniac, post:3, topic:6
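The difference can be illustrated with a toy expression tree (the Node class is invented for illustration): relay.analysis.post_order_visit calls a callback on every node in post-DFS order (children before parents) and only observes, while ExprMutator's visit methods return a possibly-rewritten node, so it rebuilds the expression.

```python
# Hedged sketch of post-DFS visiting order on a toy expression tree.
class Node:
    def __init__(self, name, *children):
        self.name = name
        self.children = children

def post_order_visit(node, fvisit):
    """Visit children first, then the node itself (post-DFS order)."""
    for child in node.children:
        post_order_visit(child, fvisit)
    fvisit(node)

# add(mul(x, y), z): leaves are visited before the ops that use them.
expr = Node("add", Node("mul", Node("x"), Node("y")), Node("z"))
order = []
post_order_visit(expr, lambda n: order.append(n.name))
assert order == ["x", "y", "mul", "z", "add"]
```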

[TVM Discuss] [Questions] PyTorch to Relay IR

2020-06-01 Thread Steve via TVM Discuss
Hi All, I am wondering if someone has written (or if there already exists) functionality that converts a neural network module from PyTorch to Relay IR? I have seen the PyTorch/TVM project, but I am not sure if this project converts PyTorch to Relay IR or to the old TVM IR? Thanks, S. --- [Visit
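For the record, TVM ships a PyTorch frontend: relay.frontend.from_pytorch imports a TorchScript-traced module as a Relay IRModule, i.e. it targets Relay IR rather than the older graph IR. A minimal sketch, with the torch/tvm imports kept inside the function so it reads even without those packages installed (check the TVM docs for the exact signature in your version):

```python
# Hedged sketch of the PyTorch -> Relay import path.
def make_shape_list(input_name, input_shape):
    # from_pytorch expects a list of (input_name, shape) pairs.
    return [(input_name, list(input_shape))]

def pytorch_to_relay(model, input_shape, input_name="input0"):
    """Trace a PyTorch module and import it as a Relay IRModule."""
    import torch
    from tvm import relay
    scripted = torch.jit.trace(model.eval(), torch.randn(input_shape))
    mod, params = relay.frontend.from_pytorch(
        scripted, make_shape_list(input_name, input_shape))
    return mod, params

assert make_shape_list("input0", (1, 3, 224, 224)) == [("input0", [1, 3, 224, 224])]
```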

[TVM Discuss] [Questions] Relay Level Tiling of Conv2d or any operator

2020-05-07 Thread Steve via TVM Discuss
Dear All, I am wondering how I can write a Relay pass that tiles conv2d by its output channels (data partitioning) at the Relay graph level? For example, let us assume that I have a relay program like the one below, and I want to be able to traverse the relay graph that contains this conv2d, and able t
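Tiling by output channels means splitting the weight tensor along its out-channel axis, running each slice as its own convolution, and concatenating the results; in Relay this would be expressed with relay.split / relay.concatenate around conv2d. A plain-Python sketch (function names invented; a 1x1 conv is modeled as per-filter dot products):

```python
# Hedged sketch: output-channel tiling preserves the conv result.
def conv1x1(inputs, filters):
    """inputs: input-channel values at one pixel;
    filters: one weight row per output channel."""
    return [sum(w * x for w, x in zip(f, inputs)) for f in filters]

def tiled_conv1x1(inputs, filters, num_tiles):
    """Split filters into num_tiles groups along the out-channel
    axis, run each group independently, and concatenate outputs."""
    step = (len(filters) + num_tiles - 1) // num_tiles
    out = []
    for start in range(0, len(filters), step):
        out.extend(conv1x1(inputs, filters[start:start + step]))
    return out

inputs = [1.0, 2.0, 3.0]
filters = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]  # 4 out channels
assert tiled_conv1x1(inputs, filters, 2) == conv1x1(inputs, filters)
```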

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-06 Thread Steve via TVM Discuss
Thank you @masahi This is very helpful. But I am more than puzzled. Let us say you have two HW units capable of running the original example's add1 and add2. So according to your answer, add1 and add2 CANNOT run in parallel? Could you provide some insights on this? Also provide

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-06 Thread Steve via TVM Discuss
Thank you @hht This is very useful. I have two follow-up questions. 1) What is the purpose of external_mods in the LoweredOutput structure? 2) I am wondering if I can get more details about how CodeGen in TVM works? I mean, what is the sequence? I know it starts from Relay, and I am

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-05 Thread Steve via TVM Discuss
@hht - thank you again. Now it makes some sense. Could you please clarify what you mean by "Parallelism only exists in the module."? My understanding is that there is only one Module, and the module contains multiple graph nodes that can run in parallel. [quote="hht, post:5, topic:657

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-05 Thread Steve via TVM Discuss
@hht -- this is definitely interesting. In my example, add1 and add2 are Op types, and thus I'd expect them to run in parallel on HW that is capable of running two adders ("+") in parallel. [quote="hht, post:5, topic:6572"] There is no strategy to enforce parallelism to the op_e

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-04 Thread Steve via TVM Discuss
@hht -- thank you very much. So does this mean we cannot enforce parallelism in GraphRuntime? If I understand correctly, it looks like GraphRuntime does not run add1 and add2 in parallel? Basically, I am wondering if there is a mechanism to enforce parallelism in GraphRuntime from the hi
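GraphRuntime itself walks its node list serially, so graph-level parallelism is not something that can simply be switched on there. As a toy illustration of what a graph-level parallel schedule *would* look like (plain Python, not a TVM mechanism): the independent nodes add1 and add2 are dispatched together on a thread pool and joined before the node that consumes both.

```python
# Hedged sketch: concurrent dispatch of two independent graph nodes.
from concurrent.futures import ThreadPoolExecutor

def add(a, b):
    return a + b

x1, x2, x3, x4 = 1, 2, 3, 4
with ThreadPoolExecutor(max_workers=2) as pool:
    # add1 and add2 have no data dependence on each other,
    # so they may be issued concurrently.
    f1 = pool.submit(add, x1, x2)  # add1
    f2 = pool.submit(add, x3, x4)  # add2
    result = add(f1.result(), f2.result())  # add3 consumes both
assert result == (x1 + x2) + (x3 + x4)
```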

[TVM Discuss] [Questions] Execution order of operators at Runtime in TVM

2020-05-03 Thread Steve via TVM Discuss
Dear All, I am wondering how the execution order of operators is defined at runtime in TVM? For example, in the following example, add1 and add2 are parallel; how does the TVM runtime schedule these on hardware? (Surely it depends on the target HW, but assume we have a HW that is capable of
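A graph executor in the style of GraphRuntime runs nodes one at a time, in the topological order baked into the graph JSON, even when two nodes are independent. A toy executor (all names invented for illustration) makes that serialized order explicit:

```python
# Hedged sketch: serial execution in fixed topological order.
def run_graph(nodes, inputs):
    """nodes: list of (name, op, arg_names) in topological order."""
    env = dict(inputs)
    trace = []
    for name, op, args in nodes:  # strictly serial walk
        env[name] = op(*(env[a] for a in args))
        trace.append(name)
    return env, trace

graph = [
    ("add1", lambda a, b: a + b, ("x1", "x2")),
    ("add2", lambda a, b: a + b, ("x3", "x4")),
    ("add3", lambda a, b: a + b, ("add1", "add2")),
]
env, trace = run_graph(graph, {"x1": 1, "x2": 2, "x3": 3, "x4": 4})
assert env["add3"] == 10
assert trace == ["add1", "add2", "add3"]  # add2 never overlaps add1
```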

[TVM Discuss] [Questions] Relationship between json and TVM Runtime: How operators are selected for execution

2020-04-30 Thread Steve via TVM Discuss
Thank you very much. How operators are run (or scheduled) is one thing that TVM needs documentation for. I think it is important for people to understand how the graph operators are executed, because there is more parallelism at the graph level than at the operator level in some network

[TVM Discuss] [Questions] Relationship between json and TVM Runtime: How operators are selected for execution

2020-04-23 Thread Steve via TVM Discuss
Dear All, I am new to TVM and having trouble understanding how TVM selects and executes operators. Question: How does TVM decide which operator to execute (add1 and add2 are parallel), and where (and how) is this information figured out by TVM? ``` x1