Could you provide values for these parameters?

    b = tvm.var('b')  # batch size -> 12 or n
    n = tvm.var('n')  # sequence length -> 512
    h = tvm.var('h')  # number of heads -> ?
    m = tvm.var('m')  # hidden dimension -> 768
    w = tvm.var('w')  # window size -> 512
    w_upper = tvm.var('w_upper')  # window size to the right of the word. Should be `0` or `w` -> ?
    padding = tvm.var('padding')  # padding -> ?
    transpose_t1 = tvm.var('transpose_t1')  # t1 should be transposed -> True / False
    t1d3 = tvm.var('t1d3')  # last dimension of t1 -> ?
    t3d3 = tvm.var('t3d3')  # last dimension of t3 (the result tensor) -> ?
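For context, here is a rough pure-Python sketch of how these parameters could plug into the tensor shapes of a sliding-window (banded) matmul like Longformer's. The `"qk"`/`"pv"` modes, the shape rules, and the example values below are illustrative assumptions, not the kernel's actual contract:

```python
def diagonaled_mm_shapes(b, n, h, m, w, w_upper, mode):
    """Hypothetical shape bookkeeping for a sliding-window (banded) matmul.

    Assumptions (for illustration only):
      - "qk" mode: t1 and t2 are (b, n, h, m); the result holds one score
        per diagonal in the band, so t3d3 = w + w_upper + 1.
      - "pv" mode: t1 holds band-shaped attention weights
        (t1d3 = w + w_upper + 1); the result returns to hidden size m.
    """
    diagonals = w + w_upper + 1  # band width: w left, w_upper right, plus self
    if mode == "qk":
        return (b, n, h, m), (b, n, h, m), (b, n, h, diagonals)
    elif mode == "pv":
        return (b, n, h, diagonals), (b, n, h, m), (b, n, h, m)
    raise ValueError("mode must be 'qk' or 'pv'")

# Example with made-up values (b=12, n=512, h=8, m=64 per head, w=256, w_upper=0):
t1_shape, t2_shape, t3_shape = diagonaled_mm_shapes(12, 512, 8, 64, 256, 0, "qk")
```

Under these assumptions, `t1d3` and `t3d3` would simply toggle between `m` and `w + w_upper + 1` depending on which of the two products is being computed.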





---
[Visit Topic](https://discuss.tvm.ai/t/developing-a-faster-schedule-for-longformers-kernel/6367/5) to respond.
