@yuluny2 Hi, glad to hear that you plan to support sparse tensors. I think this is a good starting point for the dgl team to collaborate with you; there are a lot of opportunities for tvm to search for the best schedules for sparse matrix operations. It would be great if relay were powerful enough that user-defined message/reduce/apply_nodes(edges)/... functions in dgl could be converted to relay.

I also agree that you should not couple sparse tensors and dense tensors. In graph neural networks the sparse tensor usually acts as the adjacency matrix, while dense tensors act as features in most cases. They are quite different, and there is no need to unify them.
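
To make that division concrete, here is a minimal scipy/numpy sketch (the graph and feature sizes are made up for illustration): message passing over the graph boils down to an SpMM between the sparse adjacency matrix and the dense feature matrix.

```python
import numpy as np
import scipy.sparse as sp

# Adjacency of a small 4-node graph, stored as a sparse CSR matrix.
row = np.array([0, 0, 1, 2, 3])
col = np.array([1, 2, 0, 3, 2])
val = np.ones(len(row), dtype=np.float32)
adj = sp.csr_matrix((val, (row, col)), shape=(4, 4))

# Node features as a plain dense array (4 nodes, 8-dim features).
feat = np.random.rand(4, 8).astype(np.float32)

# Sum-aggregation message passing reduces to SpMM:
# sparse adjacency times dense features.
out = adj @ feat  # shape (4, 8)
```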

Yes, CSR alone is not enough: you have no access to the number of columns, so at a minimum you should maintain the attributes that scipy does: 
https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html.
 Note that there might be multiple values corresponding to one position in the sparse matrix (this is the case for multigraphs, where there can be more than one edge between two nodes).
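
A small scipy sketch of both points (the arrays below are a made-up 2-node multigraph): the number of columns cannot be recovered from the three CSR arrays alone, which is exactly why scipy stores an explicit `shape` attribute, and parallel edges show up as duplicate (row, col) entries.

```python
import numpy as np
import scipy.sparse as sp

# Raw CSR arrays for a 2-node multigraph with two parallel edges 0 -> 1.
indptr  = np.array([0, 2, 3])        # row pointers
indices = np.array([1, 1, 0])        # column indices: duplicate position (0, 1)
data    = np.array([0.5, 0.7, 1.0])  # one value per edge

# indptr/indices/data do not determine the number of columns
# (trailing empty columns are invisible), hence the explicit shape.
a = sp.csr_matrix((data, indices, indptr), shape=(2, 3))
print(a.shape)      # (2, 3)
print(a.toarray())  # duplicates at (0, 1) get summed on densification: 1.2
```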

My concern is whether we should allow the user to declare a preferred format for each sparse tensor. For different combinations of sparse matrices (different density, different degree variance, ...) and operations (spmm, sparse attention, sparse softmax, and so on), the best parallel strategy and the sparse format it requires are quite different. Though it would be great if autotvm could search for the best format, a user-given format would make the work much easier.
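
As a rough illustration of the "user-declared format" option (everything here, including the lookup table, the density buckets, and the function name, is hypothetical and not an existing tvm API): with the format fixed per (operation, matrix) case, kernel selection becomes a simple lookup instead of a search problem.

```python
import scipy.sparse as sp

# Hypothetical table: (op, density bucket) -> preferred sparse format.
# The entries are illustrative guesses, not tuned results.
PREFERRED = {
    ("spmm", "low"):  "csr",  # row-parallel SpMM for very sparse matrices
    ("spmm", "high"): "coo",  # edge-parallel when rows are denser/skewed
    ("sparse_softmax", "low"): "csr",  # needs per-row segments anyway
}

def pick_format(op: str, mat: sp.spmatrix) -> str:
    """Pick a sparse format from the user-declared table (hypothetical)."""
    density = mat.nnz / (mat.shape[0] * mat.shape[1])
    bucket = "high" if density > 0.1 else "low"
    return PREFERRED.get((op, bucket), "csr")

adj = sp.random(1000, 1000, density=0.01, format="coo")
print(pick_format("spmm", adj))  # "csr" for this low-density matrix
```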
