First, it's been a while since I looked at this code, but Jon has made several erroneous assumptions.

1) I never said that the reader task uses a CS. It does not, because only the reader task updates the 'next to be processed'.

2) Who said it was not FIFO? It is. The reader task does not start at the top each pass; it just reads the next entry if it's available. If it's not available, it waits on an ECB post on the next queue entry. When the reader task finishes processing an entry, it checks the bits in the next record's ECB and will not perform an unnecessary WAIT if the ECB is already posted.
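That check-before-WAIT can be sketched in C11 atomics, treating the ECB as the one-word control block it is on z/OS, where bit 1 is the post bit. This is only a rough model (the names are mine, not the product's), and a real POST also stores a completion code in the low-order bits:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define ECB_POST_BIT 0x40000000u  /* bit 1 of the ECB word */

/* POST: turn on the post bit (a real POST also stores a completion code). */
static void ecb_post(atomic_uint *ecb) {
    atomic_fetch_or(ecb, ECB_POST_BIT);
}

/* The reader's check: if the post bit is already on, skip the WAIT. */
static bool ecb_already_posted(atomic_uint *ecb) {
    return (atomic_load(ecb) & ECB_POST_BIT) != 0;
}
```

The point of the check is simply that a WAIT is only issued when the post bit is still off; if the writer got there first, the reader falls straight through.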

3) While I have seen a "queue full", it has been reported only a few times in the more than 20 years I have worked with the product. The logic in the 'application tasks' will report the error, but will wait a bit and retry a set number of times before fully failing. Jon incorrectly assumed that the code's author was not smart enough to think about such things.
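The bounded retry described above is a standard pattern. Here is a toy C sketch, with a fixed-capacity stand-in for the shared queue; the names (`q_try_add`, `add_with_retry`, `QCAP`) are illustrative, not from the product:

```c
#include <stdbool.h>
#include <stdio.h>

#define QCAP 4                 /* toy capacity standing in for the queue */
static int q_count = 0;

/* Attempt to grab a queue slot; fails when the queue is full. */
static bool q_try_add(const char *entry) {
    (void)entry;
    if (q_count >= QCAP)
        return false;          /* "queue full" */
    q_count++;
    return true;
}

/* Report the error, retry a set number of times, then fully fail. */
static bool add_with_retry(const char *entry, int max_tries) {
    for (int attempt = 1; attempt <= max_tries; attempt++) {
        if (q_try_add(entry))
            return true;
        fprintf(stderr, "queue full (attempt %d of %d)\n", attempt, max_tries);
        /* a real client would pause here (e.g. a short timed wait)
           before retrying */
    }
    return false;              /* retries exhausted: fully failed */
}
```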

Overall,

This is actually a very efficient bit of code on both sides. The client builds the queue entry locally, then grabs an entry using CS, moves in the control block, posts the ECB within the queue entry, and waits for the server to post a response ECB. The server side just moves the data off the queue to any of multiple subtasks within its address space that are waiting for work. The queue entry is then cleared (and thus released) without waiting for the actual control block to be processed by the subtask.
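The CS "grab an entry" step maps directly onto a compare-and-swap loop on the 'next free slot' index. A minimal sketch in C11 atomics (names and `QSIZE` are mine; a real client must also detect "queue full" by comparing against the server's 'next to be processed' index, which is omitted here):

```c
#include <stdatomic.h>

#define QSIZE 8  /* illustrative queue size */

/* Claim the next free slot the way the CS instruction does: read the old
   index, compute the wrapped successor, and swap only if no other task
   got there first. On failure, 'old' is refreshed with the current value
   and the swap is retried. */
static unsigned claim_slot(atomic_uint *next_free) {
    unsigned old = atomic_load(next_free);
    while (!atomic_compare_exchange_weak(next_free, &old,
                                         (old + 1) % QSIZE)) {
        /* another task won the race; 'old' now holds the new index */
    }
    return old;  /* this slot now belongs to the calling task */
}
```

Once the slot index is owned, the client can move its control block in and post the entry's ECB without any further serialization.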


Tony Thigpen

Jon wrote on 8/2/19 7:03 PM:
Has anyone ever seen a shared circular queue used in multi-tasking?

Yes. I work on a vendor product that uses a circular queue where
multiple address spaces are adding to the queue and another address
space reads from the queue.

The reader reads until the 'next to be processed' matches the 'next free
slot'. It always checks, and if necessary waits on, the ECB in the item
slot before it reads the slot.

There is special logic during the add process to handle a "queue full"
condition. (And, yes, I have seen it triggered.)

This design should never be used in a critical component. I'm not surprised that you've
seen "queue full". It occurs because the circular queue design
eliminates many of the benefits of multi-tasking: elements are processed sequentially instead of
FIFO.

Using an ECB in every element means the elements from higher-priority tasks
must wait on the slowest task (even worse across multiple address spaces). If the
system gets heavily loaded, you might have a long-running batch address space that is
swapped out, causing major delays. You'll have elements queueing up until this
slow task finally gets swapped back in.

The second problem is abend recovery or storage overlays, which could
potentially disable the product. I've worked on products where this is not
tolerated. A real queue may lose a few elements, but at least the system keeps
running until you complete critical work and cleanly restart the system.

As an FYI, the reader task could use the ECB instead of comparing "next to be processed".
Also, CS is not needed for "next to be processed", because only the reader task updates it.

Jon.

