Sorry Tony, I'm sure the author knew what they were doing; they did get it to work. But multi-tasking problems are easily overlooked, and circular queues are a case in point: they have fatal flaws that cause unnecessary failures, and this implementation demonstrates one of them.
In this case, a system running at 100% will cause unnecessary failures or unnecessary delays in critical batch. A low-priority task using this queue will stall active high-priority tasks (probably critical long-running batch), and these failures come at the worst possible time, because critical long-running batch will fail.

I'm sorry if I implied the author did not consider queue full. In this case, queue-full recovery is useless because it will fail 99% of the time: it's unlikely the low-priority task gets swapped in before the retries are exhausted.

As for reporting queue full, it occurs far more often than it is reported. Customers work around the problem by not allowing 100%. They get accustomed to the error, or they look at the problem database, see it has already been reported, and don't bother reporting it again. The unreported symptom is unexplained delays (programs run longer than usual), and delays are very difficult for customers to attribute to a specific program.

Typically in multi-tasking, a FIFO queue orders work by when it is ready to be dispatched, not by when the buffer is obtained. Maybe this is a FAFO queue (First Allocated, First Out). Elements already queued (ECB posted) sit waiting behind an unposted entry; this causes a backlog. I did not say (or imply) that the program starts from the top instead of waiting on the ECB.

Jon.

> On Friday, August 2, 2019, 04:33:50 PM PDT, Tony Thigpen <[email protected]> wrote:
>
> One, it's been a while since I looked at this code. But Jon has made several erroneous assumptions.
>
> 1) I never said that the reader task uses a CS. It does not, because only it updates the 'next to be processed'.
>
> 2) Who said it was not FIFO? It is. The reader task does not start at the top each pass; it just reads the next entry if it's available. If it's not available, it waits on an ECB POST on the next queue entry. When the reader task finishes processing an entry, it checks the bits in the next record's ECB and will not perform an unnecessary WAIT if the ECB is already posted.
> 3) While I have seen a "queue full", it has been reported only a few times in the 20-plus years I have worked with the product. The logic in the 'application tasks' will report the error, but it waits a bit and retries a set number of times before fully failing. Jon incorrectly assumed that the code's author was not smart enough to think about such things.
>
> Overall, this is actually a very efficient bit of code on both sides. The client builds the queue entry locally, then grabs an entry using CS, moves the control block in, posts the ECB within the queue entry, and waits for the server to post a response ECB. The server side just moves the data off the queue to any of multiple subtasks within its address space that are waiting for work. The queue entry is then cleared (and thus released) without waiting for the actual control block to be processed by the subtask.
