On Mon, 2021-02-15 at 09:19 -0600, Kinsey Moore wrote:
> From: Ryan Long <ryan.l...@oarcorp.com>
> 
> The ts-validation-0 test currently fails on 64bit BSPs due to a
> limitation of the message structure. Changing the max message size to a
> size_t type and adjusting the expected value in the test resolves this.
This talks about the maximum message size, but the change is to the
pending message count?

> Closes #4179.

This seems to be the wrong bug reference; is #4197 the correct one?


If this change is correct, there are a couple of leftover casts which
should perhaps be adjusted as well:

cpukit/score/src/coremsg.c:64:    (size_t) maximum_pending_messages * buffer_size,
cpukit/score/src/coremsg.c:89:    (size_t) maximum_pending_messages,


However, I'm wondering whether this is the right way to fix this...

I'm guessing that the failure mentioned is based on this specification
in rtems-central : spec/rtems/message/req/construct-errors.yml

393     - enabled-by: true
394       post-conditions:
395         Status: InvNum
396       pre-conditions:
397         Area: all
398         AreaSize: all
399         Id:
400         - Id
401         MaxPending:
402         - Big
403         MaxSize:
404         - Valid
405         Name:
406         - Valid
407         Queues:
408         - Avail

Which in practice seems to specify that 

rtems_message_queue_create(
  name,
  UINT32_MAX, /* count */
  1,          /* size */
  attribute_set,
  &id
);

must fail with RTEMS_INVALID_NUMBER due to

117  - name: Big
118    test-code: |
119      ctx->config.maximum_pending_messages = UINT32_MAX;
120    text: |
121      The maximum number of pending messages of the message queue configuration
122      shall be big enough so that a calculation to get the message buffer
123      storage area size overflows.

which in the code looks like

/* Make sure the memory allocation size computation does not overflow */
if ( maximum_pending_messages > SIZE_MAX / buffer_size ) {              
  return STATUS_MESSAGE_QUEUE_INVALID_NUMBER;                           
}                                                                       

But when SIZE_MAX is a 64-bit size_t, UINT32_MAX * (1 + buffer
overhead) cannot reasonably overflow SIZE_MAX, so this check passes
and the call reports success instead of the expected invalid number,
which is the failure seen in the validation test. Is that correct?


If so, it seems very odd to change the interface just to allow this
failure to occur.

Would it be possible to instead specify that if

SIZE_MAX >= UINT32_MAX * (1 + buffer overhead)

then this case should be skipped, or should expect success instead?

-- 
Martin Erik Werner
Software Developer | AAC Clyde Space
aac-clydespace.com/privacy-policy
