yes, thanks for the hint from this post.
---
Looks like it's solved, but ping me if you have other issues with sparse stuff.
I'm not as well versed as some other developers, but I have been working on it
on-and-off the past couple of months.
---
Hi, I am struggling with sparse cmm usage. Could you please take a look at this post?
https://discuss.tvm.apache.org/t/error-ndarray-object-has-no-attribute-a/9107
Thanks.
---
Useful resource, thanks. I ended up fixing the approach in my 2nd example
(using three `ndarray`s). Basically one can only pass sparse tensors that have
the same sparsity pattern as the placeholders you pass, not just the same
level of sparsity. Thus, when constructing the placeholder tensors, they have
to be derived from the same sparsity pattern as the data you will eventually
pass in.
---
Another workaround is to get rid of the sparse placeholder completely.
Instead, use three standard tensors, one for each component of the sparse
tensor (i.e. data, indices and index pointers). Then feed those (plus the X
matrix) to `topi.nn.sparse_dense` and things seem to work.
There's a working implementation of this approach.
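For reference, here is a minimal sketch of that route (not the implementation
referred to above): three ordinary `te.placeholder`s for the CSR arrays, fed
together with `X` to `topi.nn.sparse_dense`. It assumes a reasonably recent
TVM and SciPy; the sizes `M`, `K`, `N` and the density are made up for
illustration. It also shows the point about the sparsity pattern: the
placeholder shapes are taken from the concrete CSR matrix you intend to pass
at run time.

```python
import numpy as np
import scipy.sparse as sp
import tvm
from tvm import te, topi

# Dense input X: (M, K); sparse weights W: (N, K) in CSR, so Y = X * W^T is (M, N).
M, K, N, density = 8, 16, 32, 0.25
W_np = sp.random(N, K, density=density, format="csr", dtype="float32")
X_np = np.random.rand(M, K).astype("float32")

# Ordinary placeholders for the three CSR arrays instead of a sparse placeholder.
# Their shapes come from the concrete matrix, so the build is specialized to it.
X = te.placeholder((M, K), dtype="float32", name="X")
W_data = te.placeholder(W_np.data.shape, dtype="float32", name="W_data")
W_indices = te.placeholder(W_np.indices.shape, dtype="int32", name="W_indices")
W_indptr = te.placeholder(W_np.indptr.shape, dtype="int32", name="W_indptr")

Y = topi.nn.sparse_dense(X, W_data, W_indices, W_indptr)
s = te.create_schedule(Y.op)
func = tvm.build(s, [X, W_data, W_indices, W_indptr, Y], target="llvm")

dev = tvm.cpu()
out = tvm.nd.empty((M, N), "float32", dev)
func(
    tvm.nd.array(X_np, dev),
    tvm.nd.array(W_np.data.astype("float32"), dev),
    tvm.nd.array(W_np.indices.astype("int32"), dev),
    tvm.nd.array(W_np.indptr.astype("int32"), dev),
    out,
)
np.testing.assert_allclose(out.numpy(), X_np @ W_np.toarray().T, rtol=1e-4)
```

If the sparsity pattern of the runtime matrix changes, the placeholders (and
the built function) have to be regenerated, which is the caveat mentioned a
post up.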
---
I have also tried bypassing this issue by passing the three tensor objects
inside a sparse array:
```python
from tvm.contrib import sparse
# create placeholder tensors
...
n = out_c
k = kdim_h * kdim_w * in_c
# nonzeros must be an integer; it fixes the lengths of the data/indices arrays
sparse_weights = sparse.placeholder((n, k), nonzeros=int((1 - sparsity) * n * k),
                                    name='W')
```
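In case it helps: as far as I can tell, `sparse.placeholder` returns a
`CSRPlaceholderOp` that is really a bundle of three ordinary tensors
(`.data`, `.indices`, `.indptr`), and it is those components that end up in
the argument list of `tvm.build`, with values supplied from a SciPy CSR matrix
at run time. A rough sketch with made-up shapes (not the full example from
this post):

```python
import scipy.sparse as sp
import tvm
from tvm.contrib import sparse

n, k, sparsity = 64, 576, 0.9
W = sparse.placeholder((n, k), nonzeros=int((1 - sparsity) * n * k),
                       dtype="float32", name="W")

# The sparse placeholder bundles three dense tensors.
print(W.data.shape, W.indices.shape, W.indptr.shape)

# At build time you would list the components rather than W itself, e.g.
#   tvm.build(s, [X, W.data, W.indices, W.indptr, Y], target="llvm")
# and at run time feed them from a concrete CSR matrix:
W_np = sp.random(n, k, density=1 - sparsity, format="csr", dtype="float32")
w_data = tvm.nd.array(W_np.data.astype("float32"))
w_indices = tvm.nd.array(W_np.indices.astype("int32"))
w_indptr = tvm.nd.array(W_np.indptr.astype("int32"))
```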
---
From the [discussion about running sparse
CNNs](https://discuss.tvm.ai/t/running-a-cnn-using-sparsity-convertor/7267/11),
I have implemented prototypes of a dense NCHW GEMM convolution, and what I
think is a working CSR NCHW GEMM convolution.
I will share the code once it's a bit more mature.
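Not the prototype mentioned above, but for anyone following along, a minimal
NumPy reference of what an NCHW GEMM (im2col) convolution looks like. The
flattened weight matrix has exactly the `(out_c, kdim_h * kdim_w * in_c)`
shape used for `n` and `k` earlier; the sparse variant would store that matrix
in CSR and replace the matmul with a sparse-dense one.

```python
import numpy as np

def conv2d_gemm_nchw(x, w):
    """Reference NCHW convolution as a GEMM (im2col), stride 1, no padding."""
    n, in_c, h, w_in = x.shape
    out_c, _, kh, kw = w.shape
    out_h, out_w = h - kh + 1, w_in - kw + 1

    # im2col: one column of length in_c*kh*kw per output pixel.
    cols = np.empty((n, in_c * kh * kw, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, :, i:i + kh, j:j + kw].reshape(n, -1)
            cols[:, :, i * out_w + j] = patch

    # Weights flattened to (out_c, in_c*kh*kw); this is the matrix that
    # would be kept in CSR for the sparse version.
    w_mat = w.reshape(out_c, -1)

    y = np.einsum("ok,nkp->nop", w_mat, cols)  # one GEMM per batch element
    return y.reshape(n, out_c, out_h, out_w)
```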