On 2/4/26 08:44, Philippe Mathieu-Daudé wrote:
> On 12/1/26 09:56, Paolo Bonzini wrote:
>> On 1/10/26 00:23, Richard Henderson wrote:
>>> I'm not fond of the pointer arithmetic or the code structure.
>>> Perhaps better as
>>>
>>>     switch (mop & (MO_BSWAP | MO_SIZE)) {
>>>     case MO_LEUW:
>>>         return lduw_le_p(ptr);
>>>     case MO_BEUW:
>>>         return lduw_be_p(ptr);
>>>     ...
>>>     default:
>>>         g_assert_not_reached();
>>>     }
>>> which would hopefully compile to host endian-swapping load insns like
>>>
>>> .L1:
>>>     mov (ptr), %eax
>>>     ret
>>> .L2:
>>>     movbe (ptr), %eax
>>>     ret
>>> It might only do so for 32 bits, because movbe also bundles a free 32->64-bit zero
>>> extension, but not for the smaller ones. Thinking about which, I think ldm_p also needs
>>> to handle MO_SIGN? It can be done all in one with
>>> static inline uint64_t ldm_p(const void *ptr, MemOp mop)
>>> {
>>>     const unsigned size = memop_size(mop);
>>>     uint64_t val = 0;
>>>     uint8_t *pval = (uint8_t *)&val;
>>>
>>>     if (HOST_BIG_ENDIAN) {
>>>         pval += sizeof(val) - size;
>>>     }
>>>     assert(size <= 8);
>>>     __builtin_memcpy(pval, ptr, size);
>>>
>>>     if (mop & MO_BSWAP) {
>>>         val = __builtin_bswap64(val);
>>>     } else if (mop & MO_SIGN) {
> Can't we have both MO_BSWAP && MO_SIGN bits set?
>>>         val <<= (64 - 8 * size);
>>>     } else {
>>>         return val;
>>>     }
>>>
>>>     if (mop & MO_SIGN) {
>>>         return ((int64_t) val) >> (64 - 8 * size);
>>>     } else {
>>>         return val >> (64 - 8 * size);
>>>     }
>>> }
Yes, but that's handled by always using bswap64 so that the lsb ends up at the msb, and
the shift takes care of the sign extend.
r~