On 10/06/2021 22:41, Richard Henderson wrote:
On 6/9/21 7:10 AM, Philippe Mathieu-Daudé wrote:
+    oi = make_memop_idx(MO_UB, mmu_idx);
+    if (memop_big_endian(op)) {
+        for (i = 0; i < size; ++i) {
+            /* Big-endian load. */
+            uint8_t val8 = helper_ret_ldub_mmu(env, addr + i, oi, retaddr);
+            val |= v
From: Mark Cave-Ayland
[RFC because this is currently only lightly tested and there have been some
discussions about whether this should be handled elsewhere in the memory API]
If an unaligned load is required, the load is split into two separate accesses
and the results are combined within load_help