On 10/15/25 10:26 AM, Tanmay Shah wrote:


On 10/8/25 11:49 AM, Tanmay Shah wrote:


On 10/7/25 2:58 PM, Jassi Brar wrote:
On Tue, Oct 7, 2025 at 10:19 AM Tanmay Shah <[email protected]> wrote:

Sometimes clients need to know whether the mailbox queue is full before
posting a new message via the mailbox. If the queue is full, clients can
choose not to post a new message. This doesn't mean the current queue
length should be increased; rather, clients may want to wait until the
previous Tx is done. Introduce a per-channel variable to track the
number of available message slots. Clients can check this variable and
decide not to send a new message if it is 0. This can help avoid the
false-positive warning from the mailbox framework: "Try increasing
MBOX_TX_QUEUE_LEN".

Signed-off-by: Tanmay Shah <[email protected]>
---

v2:
   - Separate patch for remoteproc subsystem.
   - Change design and introduce msg_slot_ro field for each channel
     instead of API. Clients can use this variable directly.
   - Remove mbox_queue_full API

  drivers/mailbox/mailbox.c          | 3 +++
  include/linux/mailbox_controller.h | 2 ++
  2 files changed, 5 insertions(+)

diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
index 5cd8ae222073..c2e187aa5d22 100644
--- a/drivers/mailbox/mailbox.c
+++ b/drivers/mailbox/mailbox.c
@@ -35,6 +35,7 @@ static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
         idx = chan->msg_free;
         chan->msg_data[idx] = mssg;
         chan->msg_count++;
+       chan->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);

         if (idx == MBOX_TX_QUEUE_LEN - 1)
                 chan->msg_free = 0;
@@ -70,6 +71,7 @@ static void msg_submit(struct mbox_chan *chan)
                 if (!err) {
                         chan->active_req = data;
                         chan->msg_count--;
+                       chan->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);

No, I had suggested adding this info to the client structure.
There is no point in carrying msg_count and msg_slot_ro in mbox_chan.
The client needs this info but cannot/should not access the chan internals.


Hi Jassi,

If I move msg_slot_ro to mbox_client, that means each channel needs its own client structure. However, it is possible to map a single client to multiple channels.

How about renaming msg_slot_ro to msg_slot_tx_ro, to indicate that the field should be used only for the "tx" channel?

Or is it a must to map a unique client data structure to each channel? If so, can I update the mbox_client structure documentation?


Hi Jassi,

I looked into this further. It looks like it's not possible to introduce msg_slot_ro in the client data structure as of today, because multiple drivers share the same client for both the "tx" and "rx" channels. [references: 1, 2, 3]

So, I believe msg_slot_ro won't be calculated correctly.
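
For context, those drivers set up their channels roughly like this
(simplified sketch; "priv" is just a placeholder for the driver's
private data):

	/*
	 * One mbox_client serves both channels, so a per-client slot
	 * counter would mix "tx" and "rx" accounting.
	 */
	struct mbox_client *cl = &priv->cl;

	cl->dev = dev;
	cl->tx_block = false;

	priv->tx_ch = mbox_request_channel_byname(cl, "tx");
	priv->rx_ch = mbox_request_channel_byname(cl, "rx");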

I can change the architecture of the xlnx driver, i.e. assign a separate client to each channel, but that still won't work for other platforms. Please let me know how to proceed further.

Can we provide an API that checks whether (MBOX_TX_QUEUE_LEN - chan->msg_count) == 0?
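
Something along these lines, perhaps (the name and exact form are
hypothetical, just to illustrate the idea; chan->msg_count is protected
by chan->lock, so take the lock for a consistent read):

	bool mbox_tx_queue_full(struct mbox_chan *chan)
	{
		unsigned long flags;
		bool full;

		spin_lock_irqsave(&chan->lock, flags);
		full = (MBOX_TX_QUEUE_LEN - chan->msg_count) == 0;
		spin_unlock_irqrestore(&chan->lock, flags);

		return full;
	}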

I am not sure whether I should attempt to change the architecture of other platforms' drivers.


References:

[1] https://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux.git/tree/drivers/remoteproc/imx_rproc.c?h=for-next#n768

[2] https://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux.git/tree/drivers/remoteproc/xlnx_r5_remoteproc.c?h=for-next#n280

[3] https://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux.git/tree/drivers/remoteproc/st_remoteproc.c?h=for-next#n386

Thank you.



Hello Jassi,

Gentle reminder on this matter. I am waiting for your input on the above.

Thank You,
Tanmay

Thanks,
Tanmay.

thanks
jassi




