And also:

> 'compile then execute' is not enough for all deep learning workloads. For
> example, using our partial evaluator to specialize
> training/validation/testing data means we must compile only after we have
> loaded all the data.

So in DL, common practice is to specify the input shape in an ad-hoc way. In 
MNIST, for instance, we know that our input has shape `(batch_size, 784)`. For 
more complicated workloads, such as models containing non-trivial control 
flow, I don't think loading all the data would suffice. Compilation should 
probably happen at the basic-block level if the IR is a CFG (so you need a 
JIT).
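To illustrate the point, here is a minimal Python sketch (hypothetical, not TVM's actual machinery) of shape-specialized lazy compilation: with dynamic shapes or data-dependent control flow we cannot pick one shape ahead of time, so the first call with a new input shape triggers "compilation", and later calls with that shape reuse the cached specialization.

```python
compile_count = 0  # how many times the (stand-in) compiler ran

def shape_jit(fn):
    """Compile `fn` lazily, once per observed input shape (sketch only)."""
    cache = {}  # shape -> callable specialized for that shape

    def wrapper(batch):
        global compile_count
        shape = (len(batch), len(batch[0]))
        if shape not in cache:
            compile_count += 1
            # Stand-in for real codegen: pretend this is `fn` compiled
            # and specialized for `shape`.
            cache[shape] = fn
        return cache[shape](batch)

    return wrapper

@shape_jit
def row_sums(batch):
    return [sum(row) for row in batch]

row_sums([[1.0] * 784])      # first (1, 784) batch: compiles
row_sums([[2.0] * 784])      # same shape: cache hit, no recompile
row_sums([[3.0] * 784] * 2)  # new (2, 784) shape: compiles again
```

A real system would key the cache per basic block rather than per whole function, but the lifecycle is the same: compilation is deferred until the shapes (and taken branches) are actually observed.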

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/4054#issuecomment-538060918
