`topi.cuda.batch_matmul.schedule_batch_matmul` with constant `d1` and `d2` 
gave the best performance. It is still about 3x slower than PyTorch, though.
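
For reference, this is roughly how the schedule gets applied and timed (a minimal sketch assuming the pre-0.7 TVM API where `topi` is a top-level package; the shapes `B, M, N, K` below are placeholders, not the actual workload):

```python
import numpy as np
import tvm
import topi

B, M, N, K = 8, 512, 512, 512  # placeholder shapes

A = tvm.placeholder((B, M, K), name="A", dtype="float32")
W = tvm.placeholder((B, N, K), name="W", dtype="float32")  # batch_matmul takes the second operand transposed
C = topi.nn.batch_matmul(A, W)

s = topi.cuda.batch_matmul.schedule_batch_matmul([C])
func = tvm.build(s, [A, W, C], target="cuda")

ctx = tvm.gpu(0)
a = tvm.nd.array(np.random.rand(B, M, K).astype("float32"), ctx)
w = tvm.nd.array(np.random.rand(B, N, K).astype("float32"), ctx)
c = tvm.nd.array(np.zeros((B, M, N), dtype="float32"), ctx)

evaluator = func.time_evaluator(func.entry_name, ctx, number=100)
print("batch_matmul: %.3f ms" % (evaluator(a, w, c).mean * 1e3))
```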

`topi.cuda.batch_matmul.schedule_batch_matmul` is not instrumented with autotvm 
knobs; I tried changing the few constants it has, but that didn't help.
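
For comparison, exposing those constants as tunable parameters would follow the usual autotvm template pattern, something like this (a minimal sketch with a plain schedule and hypothetical knob names `tile_y`/`tile_x`/`tile_k`, not the actual `topi.cuda` schedule):

```python
# Minimal sketch of the generic autotvm template pattern (pre-0.7 API);
# this is NOT the actual topi.cuda batch_matmul schedule -- knob names,
# split structure, and thread binding here are illustrative only.
import tvm
import topi
from tvm import autotvm

@autotvm.template
def batch_matmul_template(batch, M, N, K):
    A = tvm.placeholder((batch, M, K), name="A", dtype="float32")
    W = tvm.placeholder((batch, N, K), name="W", dtype="float32")
    C = topi.nn.batch_matmul(A, W)
    s = tvm.create_schedule(C.op)

    cfg = autotvm.get_config()
    b, y, x = s[C].op.axis
    (k,) = s[C].op.reduce_axis

    # Expose the tiling factors as knobs instead of hard-coded constants.
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)
    cfg.define_split("tile_k", k, num_outputs=2)

    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    ko, ki = cfg["tile_k"].apply(s, C, k)

    # Simple GPU mapping: blocks over the outer tiles, threads over the
    # inner ones, full reduction inside each thread. Configs that exceed
    # the thread limit just fail and are skipped during tuning.
    s[C].reorder(b, yo, xo, yi, xi, ko, ki)
    s[C].bind(b, tvm.thread_axis("blockIdx.z"))
    s[C].bind(yo, tvm.thread_axis("blockIdx.y"))
    s[C].bind(xo, tvm.thread_axis("blockIdx.x"))
    s[C].bind(yi, tvm.thread_axis("threadIdx.y"))
    s[C].bind(xi, tvm.thread_axis("threadIdx.x"))
    return s, [A, W, C]
```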





