# Model Library Format
## Background
TVM's build process for imported models centers around `tvm.relay.build`, a
function which produces a 3-tuple `(graph_json, lib, params)`. The inference
workflow then diverges depending on how the user wants to use the compiled
artifacts:
- If the build
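For concreteness, here is a minimal sketch of the build call described above. The toy Relay model, the `llvm` target, and the opt level are assumptions made purely for illustration; the tuple-style unpacking matches the 3-tuple behavior described here, while newer TVM releases instead return a factory module from `relay.build`.

```python
import tvm
from tvm import relay

# Toy model purely for illustration: a single dense layer.
data = relay.var("data", shape=(1, 64), dtype="float32")
weight = relay.var("weight", shape=(8, 64), dtype="float32")
func = relay.Function([data, weight], relay.nn.dense(data, weight))
mod = tvm.IRModule.from_expr(func)

# tvm.relay.build produces the three artifacts discussed above.
with tvm.transform.PassContext(opt_level=3):
    graph_json, lib, params = relay.build(mod, target="llvm")
```

From here, the compiled artifacts are consumed differently depending on which of the inference workflows above the user follows.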