On Mon, Sep 01, 2025 at 06:03:41PM +0100, Peter Maydell wrote:
> Could we have a comment in this header file that documents
> what interface the test device presents to tests, please?
> Both this patch and the test-case patch are hard to
> review, because I don't know what the test device is
> trying to do or what the test code is able to assume
> about the test device.
The series has been withdrawn, but I suspect I'll need to read it again when it's picked up, so I took some time (while the brain cache is still warm..) to study the code, and I think I have a rough idea of what this testdev is about. If it gets picked up by anyone, I hope the notes below help a bit. They're also for future me..
First, the testdev creates a bunch of MRs with all kinds of attributes, covering every possible min/max access size, unaligned access, and endianness. The test cases are tailored exactly for this testdev: each test case needs to know which offset contains which type of MR, and a test only works when it is run against its paired MR group.
There are 16 groups of MRs / tests; each group contains the following number of MRs:
#define N_OPS_LIST_LITTLE_B_VALID 80
#define N_OPS_LIST_LITTLE_B_INVALID 40
#define N_OPS_LIST_LITTLE_W_VALID 60
#define N_OPS_LIST_LITTLE_W_INVALID 30
#define N_OPS_LIST_LITTLE_L_VALID 40
#define N_OPS_LIST_LITTLE_L_INVALID 20
#define N_OPS_LIST_LITTLE_Q_VALID 20
#define N_OPS_LIST_LITTLE_Q_INVALID 10
#define N_OPS_LIST_BIG_B_VALID 80
#define N_OPS_LIST_BIG_B_INVALID 40
#define N_OPS_LIST_BIG_W_VALID 60
#define N_OPS_LIST_BIG_W_INVALID 30
#define N_OPS_LIST_BIG_L_VALID 40
#define N_OPS_LIST_BIG_L_INVALID 20
#define N_OPS_LIST_BIG_Q_VALID 20
#define N_OPS_LIST_BIG_Q_INVALID 10
That's a total of 600 MRs (which is N_OPS_LIST: 80+40+60+30+40+20+20+10 = 300 per endianness, times two) at the base address of the testdev, specified by, for example:

  -device memaccess-testdev,address=0x10000000

Each MR is 32 bytes (MEMACCESS_TESTDEV_REGION_SIZE) in size, appended sequentially at increasing offsets from the base address. All of them are internally backed by:

  testdev->mr_data = g_malloc(MEMACCESS_TESTDEV_MR_DATA_SIZE);
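
My reading of the layout, just to make it concrete (the size relation below is my assumption, I didn't verify it against the series):

  /* Illustration only -- how I read the layout, not code from the series. */
  #define MEMACCESS_TESTDEV_REGION_SIZE   32
  #define N_OPS_LIST                      600
  /* presumably: */
  #define MEMACCESS_TESTDEV_MR_DATA_SIZE  (N_OPS_LIST * MEMACCESS_TESTDEV_REGION_SIZE)

  /* MR #i then covers guest addresses [base + i * 32, base + (i + 1) * 32),
   * backed by testdev->mr_data + i * MEMACCESS_TESTDEV_REGION_SIZE. */
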
Here, BIG/LITTLE decides the endianness of the MR, and B/W/L/Q decides the min_access_size of the MR. The latter wasn't clear to me at all at first.. IIUC it's hidden inside the skip_core() check, which skips everything except the combinations where valid_min == required_min, so only the MRs that satisfy that get created.
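
Paraphrasing the filter as I read it (my summary with a made-up signature, not the exact code from the series):

  /* my paraphrase of the skip_core() idea, not the series' code */
  static bool skip_core(unsigned valid_min, unsigned required_min)
  {
      /* skip any combination that doesn't match this group's min_access_size */
      return valid_min != required_min;
  }
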
And just to mention: IIUC these numbers are not random either, they are exactly how many MRs can be created under each specific MR group. Changing them could either cause uninitialized MRs (in some group) or trigger the assertion in fill_ops_list() saying the MR list is too small:

  if (idx > ops_len) {
      g_assert_not_reached();
  }

This is not obvious either.. better to document it if the series is picked up one day.
As an example, for the definition of N_OPS_LIST_LITTLE_B_VALID: we can create 80 such MRs where the MR is (1) LITTLE, (2) min_access_size=1, and (3) allows .valid.unaligned; the rest are skipped in skip_core() when building the list of MRs. And yes, here (3) isn't clear either: VALID means "the MR allows unaligned access at the API level", which implies .valid.unaligned=true. OTOH, INVALID implies .valid.unaligned=false. NOTE: it implies nothing about .impl.unaligned.
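
To make (1)-(3) concrete, I'd expect an MR in the LITTLE_B_VALID group to declare ops roughly like below (illustration only; the callback names and max_access_size are my guesses, not the series' code):

  static const MemoryRegionOps little_b_valid_ops = {
      .read = memaccess_testdev_read_little,      /* name guessed */
      .write = memaccess_testdev_write_little,    /* name guessed */
      .endianness = DEVICE_LITTLE_ENDIAN,         /* (1) LITTLE */
      .valid = {
          .min_access_size = 1,                   /* (2) B */
          .max_access_size = 8,                   /* guess */
          .unaligned = true,                      /* (3) VALID */
      },
      /* .impl.unaligned is a separate knob, not implied by VALID/INVALID */
  };
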
The test case has to be tailored for this device, because each test runs exactly on top of the group of MRs that the test case is paired with.

Taking big_w_valid() as an example: it runs the test on all MRs that are (1) BIG, (2) min_access_size=2, and (3) VALID for unaligned access, aka .valid.unaligned=true.
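
In qtest terms I'd expect it to boil down to something like below (purely illustrative; the group offset and the values are made up, not taken from the test patch):

  /* Poke the first MR of the BIG_W_VALID group (hypothetical offset). */
  uint64_t addr = base + big_w_valid_group_offset;
  qtest_writew(qts, addr, 0x1234);                      /* 2-byte, aligned */
  g_assert_cmpuint(qtest_readw(qts, addr), ==, 0x1234);
  qtest_writew(qts, addr + 1, 0x5678);                  /* unaligned is allowed here */
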
With the above said, I'll also raise a question I don't understand about the testdev implementation. It's about the endianness of the MR and how the data endianness is processed in the testdev. Please shoot if anyone knows the answer.

Again, taking the BIG write() of a test MR as the example, I wonder why the testdev has this:

  static void memaccess_testdev_write_big(void *opaque, hwaddr addr,
                                          uint64_t data, unsigned int size)
  {
      g_assert(addr + size < MEMACCESS_TESTDEV_REGION_SIZE);
      void *d = (uint8_t *)opaque + addr;
      stn_be_p(d, size, data);
  }

It means that when ->write() triggers, it assumes "data" is in host endianness and then converts it to the MR's endianness, which is BE in this case. But haven't we already done that before reaching ->write()? AFAICT the memory core already does the endianness conversion (from HOST -> MR endianness) before the write() hook is reached:

  memory_region_dispatch_write()
    adjust_endianness(mr, &data, op)
      if ((op & MO_BSWAP) != devend_memop(mr->ops->endianness)) {

Here, MO_BSWAP shouldn't normally be set, and devend_memop() should return BE, so on my host (x86_64) that means one swap into the MR's endianness. IIUC, at that point the whole "data" field is already in MR endianness and should be put into the backing memdev storage directly. I don't understand why the stn_be_p() above is not a memcpy(). Answers welcome..
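
Put differently, what I naively expected is something along the lines of below (not saying this is correct; it's exactly what I'm asking about):

  /* What I naively expected, if "data" already carries the MR endianness: */
  static void memaccess_testdev_write_big(void *opaque, hwaddr addr,
                                          uint64_t data, unsigned int size)
  {
      g_assert(addr + size < MEMACCESS_TESTDEV_REGION_SIZE);
      memcpy((uint8_t *)opaque + addr, &data, size);
  }
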
--
Peter Xu