I have routing tables in D that are not protected. Plus, I need to
better understand the various concurrency options that STREAMS
provides. Would D_MTPUTSHARED give me the highest throughput? Note
that messages in a single stream must be processed in order, since I
need to recover application-specific framing from the TCP stream (and
build it on the way out). D_MTPERMOD is safer, though of course the
least efficient.
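For concreteness, here is a toy sketch (plain userland C, not module
code) of the kind of per-stream framing state I mean, assuming
hypothetical 2-byte length-prefixed frames. Partial frames have to be
carried over from one put to the next, which is why per-stream
ordering matters to me:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical per-stream framing state: bytes left over from earlier
 * mblks must survive until the next put on the same stream. */
typedef struct {
    uint8_t buf[256];    /* carried-over bytes (toy: fixed size)  */
    size_t  len;         /* number of bytes currently buffered    */
} frame_state_t;

/* Callback invoked once per complete frame. */
typedef void (*frame_cb)(const uint8_t *payload, size_t len, void *arg);

/* Feed one chunk of the TCP byte stream; emit complete frames via cb.
 * Frames are assumed to be 2-byte big-endian length + payload.
 * Toy code: no overflow handling on the carry buffer. */
static void feed(frame_state_t *st, const uint8_t *data, size_t n,
                 frame_cb cb, void *arg)
{
    memcpy(st->buf + st->len, data, n);
    st->len += n;

    for (;;) {
        if (st->len < 2)
            break;                       /* length header incomplete */
        size_t flen = ((size_t)st->buf[0] << 8) | st->buf[1];
        if (st->len < 2 + flen)
            break;                       /* frame incomplete, wait   */
        cb(st->buf + 2, flen, arg);      /* deliver one frame        */
        memmove(st->buf, st->buf + 2 + flen, st->len - 2 - flen);
        st->len -= 2 + flen;
    }
}
```

If two puts for the same stream could run concurrently or be
reordered, this state would obviously be corrupted.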

Could you elaborate a bit on the infinite recursion, please? Are you
saying the sync queue may never get drained?

Thank you very much,

Leonti

-----Original Message-----
From: Alexander Kolbasov [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 22, 2007 8:18 PM
To: Leonti Dailis
Cc: [email protected]
Subject: Re: Putnext and qfill_syncq question 

> Here is how data
> flow looks like:
> 
> TCP -> M1 -> M2 -> D -> M3 -> M1 -> TCP
> 
> As you can see the module M1 is used on top of TCP for both incoming 
> and outgoing data. All modules use the STREAMS flags 
> (D_NEW|D_MP|D_MTPERMOD).
> 
> When a newly received packet mblk is passed by the TCP to the module 
> M1 wput, it is passed all the way to M3 via the putnext calls. I 
> traced the function call flow with a dtrace script. Then in M3, when 
> the putnext is invoked, the qfill_syncq is called. I looked at the 
> putnext source code in opensolaris, then traced it more with dtrace 
> trying to determine what causes this. I came to the conclusion that 
> since M3 is trying to invoke
> M1 again, it cannot do so (since we are still inside an M1 rput call 
> and due to MTPERMOD perimeter mode) and thus putnext calls qfill_syncq
> to push the message onto the module synchronization queue.
> 
> Is my conclusion correct? If yes, would creating a separate M4 instead
> of M1 solve this (so that the qfill_syncq is not invoked)? There are
> several reasons I don't want this behavior: normal STREAMS flow 
> control is not functioning since the normal STREAMS queue is not used 
> to back up extra traffic. Also invoking qfill_syncq on every message 
> is probably costly (though CPU utilization is almost 0) and sync queue
> can grow very quickly before it is drained, eventually causing QFULL
> condition. Are there other approaches to solving this puzzle? Would 
> appreciate any ideas.

Do you have to use D_MTPERMOD?

You may specify D_MTPUTSHARED, which will allow concurrent puts into
your module.
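To be clear about the trade-off: with D_MTPUTSHARED the framework no
longer serializes entry into your put routines, so any state shared
across concurrent puts (e.g. the routing tables in D) needs the
module's own locking. A toy userland sketch of the idea, with a
pthread mutex standing in for the module-private lock; none of this is
actual STREAMS code:

```c
#include <pthread.h>

/* Toy model: two threads stand in for two CPUs entering the module's
 * put routine concurrently, as D_MTPUTSHARED permits.  The shared
 * counter stands in for module-global state such as a routing table;
 * the mutex is the lock the module itself must supply. */

#define PUTS_PER_THREAD 100000L

static long shared_state;                 /* module-global data */
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static void *put_routine(void *arg)
{
    (void)arg;
    for (long i = 0; i < PUTS_PER_THREAD; i++) {
        pthread_mutex_lock(&state_lock);  /* without this, the
                                             increments would race */
        shared_state++;
        pthread_mutex_unlock(&state_lock);
    }
    return NULL;
}

long run_concurrent_puts(void)
{
    pthread_t t1, t2;
    shared_state = 0;
    pthread_create(&t1, NULL, put_routine, NULL);
    pthread_create(&t2, NULL, put_routine, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_state;
}
```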

Under your current scenario you may run into infinite recursion: on
the way back from M3 -> M1, the stack will unwind back to M1, which
will pick up the enqueued message and call putnext() on it, which will
place it on the syncq again...
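Schematically, the pattern can be modeled like this (a toy userland
simulation, not kernel code): every message the drain routine takes
off the sync queue gets putnext()ed straight back onto it, so no
number of drain passes ever empties the queue:

```c
#include <stddef.h>

/* Toy model of the syncq scenario: M1's drain picks a message off its
 * sync queue and putnext()s it; because the target path re-enters M1,
 * the message lands back on the same queue. */

#define MAXQ 16

typedef struct {
    int    items[MAXQ];
    size_t head, tail, count;
} syncq_t;

static void sq_put(syncq_t *q, int m)
{
    q->items[q->tail] = m;
    q->tail = (q->tail + 1) % MAXQ;
    q->count++;
}

static int sq_get(syncq_t *q)
{
    int m = q->items[q->head];
    q->head = (q->head + 1) % MAXQ;
    q->count--;
    return m;
}

/* One drain attempt: service up to 'passes' messages.  Each serviced
 * message is "putnext"ed, which in this scenario re-enqueues it, so
 * the queue never empties regardless of how many passes we make. */
size_t drain(syncq_t *q, int passes)
{
    while (passes-- > 0 && q->count > 0) {
        int m = sq_get(q);   /* pick up the enqueued message  */
        sq_put(q, m);        /* putnext() lands it back here  */
    }
    return q->count;
}
```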

Having two different modules instead of a single M1 would definitely
help here.

- akolb


_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code
