Thanks for the feedback @tqchen @jwfromm. I'll move the code to the 
namespace `tvm.utils.data`, and expose `batch_size` and `num_batches` through 
the `@property` decorator. 
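
For concreteness, here is a minimal sketch of what read-only `batch_size` and `num_batches` properties could look like. This is illustrative only: the class name, constructor signature, and numpy backing store are my assumptions, not the actual proposed API.

```python
import numpy as np


class DataLoaderSketch:
    """Illustrative sketch; the real class would live under tvm.utils.data."""

    def __init__(self, data, labels, batch_size):
        self._data = data
        self._labels = labels
        self._batch_size = batch_size
        # Number of full batches available in the dataset.
        self._num_batches = len(data) // batch_size

    @property
    def batch_size(self):
        # Read-only: no setter is defined, so assigning to loader.batch_size
        # raises AttributeError.
        return self._batch_size

    @property
    def num_batches(self):
        return self._num_batches

    def __iter__(self):
        for i in range(self._num_batches):
            start = i * self._batch_size
            yield (self._data[start:start + self._batch_size],
                   self._labels[start:start + self._batch_size])


loader = DataLoaderSketch(np.zeros((10, 3)), np.arange(10), batch_size=4)
print(loader.batch_size, loader.num_batches)  # → 4 2
```

Making these properties rather than plain attributes keeps them consistent with the dataset after construction, since users can't reassign them independently of the underlying data.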

I do agree that future support for zero-copy through DLPack is interesting, so 
it's worth considering using `tvm.runtime.NDArray`s instead of numpy arrays. 
One question I have about this, though, is whether we should store the labels 
as `tvm.runtime.NDArray`s as well as the data. If I provide a 
`tvm.runtime.NDArray` as input to a graph runtime module (or run a Relay 
module one of the other ways), is the output also a `tvm.runtime.NDArray`?

I want to make sure that the datatype of `f(data)` matches the datatype of the 
labels so users can compare them directly.





---
[Visit 
Topic](https://discuss.tvm.apache.org/t/dataloader-an-api-to-wrap-datasets-from-other-machine-learning-frameworks/9498/4)
 to respond.
