The question is whether we want to be 100% numpy-compatible.

As far as I understand, the goal and benefit of being XXX-compatible is 
to bring convenience to end users who are familiar with XXX.

In our case, as a deep learning compiler stack (like any other DL framework), 
fp32 is used as the default, because fp64 is far too expensive and not 
particularly useful. This is the consensus among the end users we are facing.

Therefore, I would prefer to use fp32 as the default. Although not compliant 
with numpy, it is a choice that deep learning practitioners are comfortable 
with :-)
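To illustrate the divergence being discussed, here is a minimal sketch (using only numpy; the fp32 behavior of DL frameworks is noted in comments) of how numpy picks float64 as its default floating-point dtype, while an fp32 default would require the user to ask for it explicitly:

```python
import numpy as np

# numpy infers float64 from plain Python floats:
a = np.array([1.0, 2.0])
print(a.dtype)  # float64

# An fp32-by-default system (as most DL frameworks behave) would instead
# produce the equivalent of an explicit float32 request:
b = np.array([1.0, 2.0], dtype=np.float32)
print(b.dtype)  # float32

# fp32 halves memory per element, which compounds quickly at DL tensor sizes:
print(a.itemsize, b.itemsize)  # 8 4
```

So being "numpy-compatible" here would mean inferring float64 where a DL user almost certainly wants float32.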





---
[Visit 
Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/4) 
to respond.