@YuanLin thanks for the detailed comments! My replies are below.  

**Suggested change 1**
I agree that the hyper-graph (edges connecting more than two nodes) concept 
isn't necessary. I picked that up from a comment in 
[schedule.h](https://github.com/dmlc/tvm/blob/master/include/tvm/schedule.h#L430),
 but your way is much clearer.

**Suggested change 2**
I will change this to "input tensors of the consumer stage" for consistency. 
I've deliberately glossed over one complication here: if the consumer's input 
tensor isn't an output of the current stage (i.e., it is produced by some 
other stage), then its TensorDom isn't computed at this point (see 
[compute_op.cc](https://github.com/dmlc/tvm/blob/master/src/op/compute_op.cc#L220)).
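To make the skipping behavior concrete, here is a minimal Python sketch of the idea (not the actual TVM code; the names `collect_tensor_dom`, `Tensor`, and the range representation are all illustrative): while propagating bounds to a consumer's inputs, only inputs produced by the *current* stage get a TensorDom entry; inputs produced by other stages are skipped, since they will be handled when InferRootBound visits their own producer stage.

```python
from collections import namedtuple

# Illustrative stand-in for a tensor: its name plus the op that produces it.
Tensor = namedtuple("Tensor", ["name", "op"])

def collect_tensor_dom(current_op, consumers):
    """Map each tensor produced by `current_op` to the index ranges its
    consumers request. `consumers` is a list of dicts from Tensor to a
    list of (min, extent) ranges. All names here are hypothetical."""
    tensor_dom = {}
    for consumer_inputs in consumers:
        for tensor, ranges in consumer_inputs.items():
            if tensor.op is not current_op:
                continue  # produced by some other stage; skipped here
            tensor_dom.setdefault(tensor.name, []).extend(ranges)
    return tensor_dom
```

For example, if a consumer reads tensor `A` (produced by the current stage) and tensor `B` (produced elsewhere), only `A` ends up in the returned map.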

**Suggested change 3**
Thanks for the suggestion; I'm sure it will improve readability! I agree that 
PassDownDomain should come first, since it is much shorter and easier to 
understand than InferRootBound.

**Question 2** 
Good catch. I was using the debug_keep_trivial_loop option of ScheduleOps. I 
will re-run the examples or mention that they use this option.
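For readers following along, a rough sketch of what the flag changes (this is an illustration, not TVM's implementation; `lower_loop` and the pseudo-IR strings are made up): a loop of extent 1 is normally simplified away rather than emitted, but with the flag set the trivial loop is kept, which makes the inferred bounds visible in the printed IR.

```python
def lower_loop(var, extent, body, debug_keep_trivial_loop=False):
    """Return a pseudo-IR string for a loop over `var` of size `extent`.
    Hypothetical helper used only to illustrate the flag's effect."""
    if extent == 1 and not debug_keep_trivial_loop:
        # Trivial loop: bind the variable to its single value instead.
        return f"let {var} = 0 in {body}"
    # Either a real loop, or a trivial loop kept for debugging.
    return f"for {var} in 0..{extent}: {body}"
```

So with the flag off, an extent-1 loop never appears in the output, which is why the examples looked different from a default run.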

**Suggested change 4**
I'd be happy to include your content in my PR, if that's OK with you.





---
[Visit Topic](https://discuss.tvm.ai/t/discuss-contributing-new-docs-for-inferbound/2151/7) to respond.

