[Apache TVM Discuss] [Development] Pass sparse tensor to tvm.build

2020-09-07 Thread jfm via Apache TVM Discuss


Another workaround is to get rid of the sparse placeholder completely. 
Instead, use a standard tensor for each component of the sparse tensor (i.e. 
data, indices, and index pointers). Then feed those (plus the X matrix) to 
`topi.nn.sparse_dense` and things seem to work.

There's a working implementation in [this repo](https://github.com/ceruleangu/Block-Sparse-Benchmark):

```
# Imports needed by this snippet (not shown in the excerpt).
from tvm import autotvm, te, topi

@autotvm.template("benchmark/block_sparse")
def block_sparse_template(W_sp_np_data_shape, W_sp_np_indices_shape,
                          W_sp_np_indptr_shape, X_np_shape):
    W_data = te.placeholder(shape=W_sp_np_data_shape, dtype='float32',
                            name='W_data')
    W_indices = te.placeholder(shape=W_sp_np_indices_shape, dtype='int32',
                               name='W_indices')
    W_indptr = te.placeholder(shape=W_sp_np_indptr_shape, dtype='int32',
                              name='W_indptr')
    X = te.placeholder(shape=X_np_shape, dtype='float32', name='X')
    Y = topi.nn.sparse_dense(X, W_data, W_indices, W_indptr)
    ...
```
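
For completeness, here is a hedged sketch (mine, not from the linked repo) of how a template like this could be instantiated as an AutoTVM task. The SciPy CSR matrix `W_sp_np`, the dense input `X_np`, the `llvm` target, and the assumption that the elided template body returns `(schedule, arg_list)` are all illustrative.

```
# Minimal sketch, assuming the elided template body above is completed so that
# it returns (schedule, [W_data, W_indices, W_indptr, X, Y]).
import numpy as np
import scipy.sparse as sp
from tvm import autotvm

X_np = np.random.randn(128, 256).astype('float32')                         # dense input (M, K)
W_sp_np = sp.random(512, 256, density=0.1, format='csr', dtype='float32')  # sparse weight (N, K)

# Only the shapes of the CSR buffers are passed to the template.
task = autotvm.task.create(
    "benchmark/block_sparse",
    args=(W_sp_np.data.shape, W_sp_np.indices.shape,
          W_sp_np.indptr.shape, X_np.shape),
    target='llvm',
)
```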





---
[Visit Topic](https://discuss.tvm.apache.org/t/pass-sparse-tensor-to-tvm-build/7739/3) to respond.



[Apache TVM Discuss] [Development] Pass sparse tensor to tvm.build

2020-09-07 Thread Wheest via Apache TVM Discuss


Useful resource, thanks.  I ended up fixing the approach in my 2nd example 
(using three `ndarray`s).  Basically, one can only pass sparse tensors that have 
the same sparsity pattern as the placeholders you pass, not just the same level 
of sparsity. 

Thus, when constructing the placeholder tensors for compilation, one needs to 
give them the shapes of the sparse data you will pass to the compiled function.

E.g. if your sparse data is in a SciPy CSR object called `W_sp_np`, your TVM 
placeholders would be constructed with:

```
W_data = te.placeholder(shape=W_sp_np.data.shape,
                        dtype=str(W_sp_np.data.dtype), name='W_data')
W_indices = te.placeholder(shape=W_sp_np.indices.shape,
                           dtype=str(W_sp_np.indices.dtype), name='W_indices')
W_indptr = te.placeholder(shape=W_sp_np.indptr.shape,
                          dtype=str(W_sp_np.indptr.dtype), name='W_indptr')
```

My (unoptimised) sparse NCHW GEMM convolution is now working. I'll see if I can 
draft a tutorial about what I've learned once I've completed some other things.





---
[Visit Topic](https://discuss.tvm.apache.org/t/pass-sparse-tensor-to-tvm-build/7739/4) to respond.



[Apache TVM Discuss] [Development] Supporting CumSum from ONNX - Use te Scan op or develop from scratch?

2020-09-07 Thread masahi via Apache TVM Discuss


Hi, I've just come across a model that requires support for the ONNX CumSum op 
https://github.com/onnx/onnx/blob/master/docs/Operators.md#CumSum. The model 
comes from the DETR object detection model 
https://github.com/facebookresearch/detr. Since this model doesn't need ad hoc 
object detection ops that are painful to support, I think it is a great fit for 
TVM. Our ONNX frontend (and also the PyTorch one) only needs to implement the CumSum op.

Since TVM has support for the scan operation 
(https://tvm.apache.org/docs/tutorials/language/scan.html#sphx-glr-tutorials-language-scan-py),
I'm wondering whether it is a good idea to implement a Relay cumsum op on top of te 
scan, or to implement a new topi operator from scratch. I also want to utilize the 
scan primitive from Thrust to support a fast cumsum on CUDA.
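
For reference, here is a minimal sketch (mine, not part of the original post) of a cumulative sum along the first axis written with `te.scan`, following the recurrence from the scan tutorial and checked against `np.cumsum`; the shapes and the `llvm` target are assumptions.

```
import numpy as np
import tvm
from tvm import te

# Cumulative sum over the first axis, written as a scan recurrence:
#   state[0, i] = X[0, i];  state[t, i] = state[t-1, i] + X[t, i]
m = te.var('m')
n = te.var('n')
X = te.placeholder((m, n), name='X', dtype='float32')
s_state = te.placeholder((m, n), name='s_state', dtype='float32')
s_init = te.compute((1, n), lambda _, i: X[0, i], name='s_init')
s_update = te.compute((m, n), lambda t, i: s_state[t - 1, i] + X[t, i],
                      name='s_update')
Y = te.scan(s_init, s_update, s_state, inputs=[X])

sch = te.create_schedule(Y.op)
f = tvm.build(sch, [X, Y], target='llvm')

x_np = np.random.rand(4, 3).astype('float32')
y_tvm = tvm.nd.empty(x_np.shape, dtype='float32')
f(tvm.nd.array(x_np), y_tvm)
np.testing.assert_allclose(y_tvm.asnumpy(), np.cumsum(x_np, axis=0), rtol=1e-5)
```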

@tqchen @kevinthesun @Laurawly @jwfromm





---
[Visit Topic](https://discuss.tvm.apache.org/t/supporting-cumsum-from-onnx-use-te-scan-op-or-develop-from-scratch/7830/1) to respond.
