@altanh Thanks for the input. I think you're right, knowledge of the layout is
not required, and I can remove that.
With regard to your concern about the list of ndarrays -- the ndarrays in the
list are meant to be batched (I should make this clearer in the documentation,
though). The intention is to allow DataLoaders to be used with relay mods that
take more than one input. So if we have a list that is `[ndarray1, ndarray2]`,
`ndarray1` is the first input to the relay mod, and `ndarray2` is the second.
For a mod that takes batched inputs, the list would look like this:
`[ndarray1]`, where `ndarray1` has dimensions `(batch_size, ...)`.
Then running the graph runtime module would look something like this:
```python
for data in dataloader:
    for i, inp in enumerate(data):
        graphmodule.set_input(i, inp)
    graphmodule.run()
```
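For concreteness, here is a minimal sketch of a DataLoader that yields such lists. The class name, shapes, and two-input layout are purely illustrative, not part of the proposed API:

```python
import numpy as np

class ToyDataLoader:
    """Hypothetical DataLoader yielding a list of batched ndarrays,
    one per relay mod input. Shapes below are illustrative only."""

    def __init__(self, num_batches=3, batch_size=8):
        self.num_batches = num_batches
        self.batch_size = batch_size

    def __iter__(self):
        for _ in range(self.num_batches):
            # First input: e.g. images; second input: e.g. token ids.
            yield [
                np.zeros((self.batch_size, 3, 224, 224), dtype="float32"),
                np.zeros((self.batch_size, 128), dtype="int64"),
            ]

loader = ToyDataLoader()
for data in loader:
    assert len(data) == 2         # one ndarray per mod input
    assert data[0].shape[0] == 8  # leading dim is batch_size
```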
With regard to whether the batch size is necessary -- one of the algorithms
commonly used to pick scales and zero points relies on the batch size because it
calculates an average across batches. (An interesting related question is how we
would use this calibration method with BERT, since it doesn't have a batch
size.)
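To illustrate why the batch size shows up, here is a hedged sketch of averaging a per-batch statistic across batches. The function name and the max-abs statistic are my own illustration, not the actual calibration algorithm:

```python
import numpy as np

def average_max_abs(dataloader, batch_size):
    """Average the per-batch max |x| across all batches.

    A scale could then be derived from this average
    (illustrative only; real calibration algorithms differ).
    """
    total, num_batches = 0.0, 0
    for data in dataloader:
        # Each batch is a list of ndarrays, one per mod input,
        # with batch_size as the leading dimension.
        for inp in data:
            assert inp.shape[0] == batch_size
        total += max(float(np.abs(inp).max()) for inp in data)
        num_batches += 1
    return total / num_batches

# Toy usage: two batches for a single-input mod.
batches = [[np.array([[1.0, -2.0]])], [np.array([[4.0, 0.5]])]]
print(average_max_abs(batches, batch_size=1))  # 3.0 = (2.0 + 4.0) / 2
```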
I thought it was cleaner to package the batch size with the data coming into
the function rather than requiring a user to figure out what it is and pass it
in directly.
Additionally, whenever you are averaging or calculating accuracy, it's useful to
have the batch size readily available instead of slicing it out of a tensor
using an index that depends on the layout.
But for non-batched data, I agree that it doesn't make sense to have a batch
size. I'm not sure what the best solution is here. One option is that DataLoader
could have a subclass called BatchedDataLoader that has the batch_size property.
I'm open to other suggestions, though.
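That BatchedDataLoader option might look something like this (a sketch of the proposed split, with all details hypothetical):

```python
class DataLoader:
    """Base interface: iterating yields a list of ndarrays,
    one per relay mod input. No batch semantics implied."""

    def __iter__(self):
        raise NotImplementedError

class BatchedDataLoader(DataLoader):
    """Batched variant: exposes batch_size, with the convention that
    every yielded ndarray has batch_size as its leading dimension."""

    def __init__(self, batch_size):
        self._batch_size = batch_size

    @property
    def batch_size(self):
        return self._batch_size
```

Code that needs the batch size (e.g. calibration) could then require a BatchedDataLoader, while non-batched consumers only depend on the base class.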
---
[Visit
Topic](https://discuss.tvm.apache.org/t/dataloader-an-api-to-wrap-datasets-from-other-machine-learning-frameworks/9498/9)
to respond.