Justin,

     Are you providing matrix entries that connect one vertex directly to 
another vertex ACROSS an edge? I don't think that is supported by the DMNetwork 
model. The assumption is that edges are connected only to vertices and vertices 
are connected only to neighboring edges.

  Everyone,

  I second Matt's reply. 

  How is the DMNetwork preallocating for the Jacobian? Does it take into 
account coupling between neighboring vertices/edges, or does it assume no 
coupling, or full coupling? If it assumes no coupling and the user has a 
good amount of coupling, it will be very slow. 

  There would need to be a way for the user to provide the coupling information 
between neighboring vertices/edges if it assumes no coupling.
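
  A minimal sketch of what such a user-supplied preallocation might look 
like (the dnnz[]/onnz[] arrays here are hypothetical; the user would have to 
compute them from the network topology, counting the couplings to 
neighboring vertices/edges):

    #include <petscmat.h>

    /* Sketch: create an AIJ Jacobian with user-supplied per-row nonzero
       counts that account for the vertex/edge couplings. */
    PetscErrorCode CreateCoupledJacobian(MPI_Comm comm, PetscInt nlocal,
                                         const PetscInt dnnz[],
                                         const PetscInt onnz[], Mat *J)
    {
      PetscErrorCode ierr;

      ierr = MatCreate(comm, J);CHKERRQ(ierr);
      ierr = MatSetSizes(*J, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
      ierr = MatSetType(*J, MATAIJ);CHKERRQ(ierr);
      /* whichever call matches the actual matrix type takes effect;
         the other is ignored */
      ierr = MatSeqAIJSetPreallocation(*J, 0, dnnz);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(*J, 0, dnnz, 0, onnz);CHKERRQ(ierr);
      return 0;
    }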

    Barry


> On May 8, 2019, at 7:44 AM, Matthew Knepley via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> 
> On Wed, May 8, 2019 at 4:45 AM Justin Chang via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> Hi guys,
> 
> I have a fully working distribution system solver written using DMNetwork. 
> The idea is that each electrical bus can have up to three phase nodes, and 
> each phase node has two unknowns: voltage magnitude and angle. In a 
> completely balanced system, each bus has three nodes, but in an unbalanced 
> system some of the buses can be either single phase or two-phase.
> 
> The working DMNetwork code I developed, loosely based on the SNES 
> network/power.c example, essentially represents each vertex as a bus. The 
> DMNetworkAddNumVariables() function adds either 2, 4, or 6 unknowns to 
> each vertex. If every single bus had the same number of variables, the mat 
> block size would be 2, 4, or 6, and my code is both fast and scalable. 
> However, if the number of unknowns per DMNetwork vertex is not the same 
> across the network, then my SNESFormJacobian function becomes extremely 
> slow. The slowdown is specifically in the MatSetValues() calls where the 
> col/row global indices contain an offset value that points to a neighboring 
> bus vertex. 
> 
> I have never seen MatSetValues() be slow unless it is allocating. Did you 
> confirm that you are not allocating, with -info?
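> 
> Grepping the -info output for "malloc" should show whether MatSetValues() 
> is triggering new allocations. You can also make any unpreallocated 
> insertion a hard error (a sketch; J here is your assembled Jacobian):
> 
>     /* error out instead of silently mallocing on a new nonzero */
>     ierr = MatSetOption(J, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);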
> 
>   Thanks,
> 
>      Matt
>  
> Why is that? Is it because I no longer have a uniform block structure and 
> lose the speed/optimization benefits of iterating through an AIJ matrix? I 
> see three potential workarounds:
> 
> 1) Treat every vertex as a three-phase bus and "zero out" all the unused 
> phase node dofs and put a 1 on the diagonal. The problem I see with this is 
> that I will have unnecessary degrees of freedom (aka non-zeros in the 
> matrix). From the distribution systems I've seen, it's possible that 
> anywhere from 1/3 to 1/2 of the buses will be two-phase or less, meaning I 
> may have nearly twice as many dofs as necessary if I wanted to 
> preserve the block size = 6 for the AIJ mat.
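> 
> A sketch of how the zeroing might be done (assuming the global row indices 
> of the unused phase dofs were collected in a hypothetical unusedRows[]):
> 
>     /* zero the rows of the unused phase dofs and put 1.0 on the diagonal;
>        the NULL args leave the rhs/solution vectors untouched */
>     ierr = MatZeroRows(J, nUnused, unusedRows, 1.0, NULL, NULL);CHKERRQ(ierr);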
> 
> 2) Treat every phase node as a vertex, i.e., solve a single-phase power flow 
> problem. That way I am guaranteed a block size = 2; this is what 
> Domenico's former student did in his thesis work. The problem I see with this 
> is that I have a larger graph, which can take more time to set up and 
> parallelize.
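> 
> In that case every vertex would get exactly two unknowns, e.g. (a sketch; 
> vStart/vEnd come from the usual range query):
> 
>     PetscInt       v, vStart, vEnd;
>     PetscErrorCode ierr;
> 
>     ierr = DMNetworkGetVertexRange(networkdm, &vStart, &vEnd);CHKERRQ(ierr);
>     for (v = vStart; v < vEnd; v++) {
>       ierr = DMNetworkAddNumVariables(networkdm, v, 2);CHKERRQ(ierr);
>     }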
> 
> 3) Create a "fieldsplit" where I essentially have three "blocks" - one for 
> buses with all three phases, another for buses with only two phases, and one 
> for single-phase buses. This way each block/fieldsplit will have a consistent 
> block size. I am not sure if this will solve the MatSetValues() issue, but 
> can anyone give pointers on how to go about achieving this?
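> 
> A sketch of how the splits might be registered (assuming hypothetical index 
> sets is3/is2/is1 holding the dofs of the three-, two-, and single-phase 
> buses have been built from the DMNetwork offsets):
> 
>     PC             pc;
>     PetscErrorCode ierr;
> 
>     ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>     ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
>     ierr = PCFieldSplitSetIS(pc, "threephase", is3);CHKERRQ(ierr);
>     ierr = PCFieldSplitSetIS(pc, "twophase", is2);CHKERRQ(ierr);
>     ierr = PCFieldSplitSetIS(pc, "onephase", is1);CHKERRQ(ierr);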
> 
> Thanks,
> Justin
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
