For a RAM memory region, the iotlb (which will be filled into the xlat_section
of CPUTLBEntryFull) is calculated as:

iotlb = memory_region_get_ram_addr(section->mr) + xlat;

1) xlat here is the offset_within_region of a MemoryRegionSection, which may
not be TARGET_PAGE_BITS aligned.
2) The ram_addr_t returned by memory_region_get_ram_addr is always
host page aligned.

So we can't assert that the sum of them is TARGET_PAGE_BITS aligned.
A failing case has been given at the link:
https://lore.kernel.org/all/[email protected]/T/
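
For illustration, a hypothetical sketch of the arithmetic (the concrete values
and the 64KiB host page / 4KiB target page sizes below are assumptions, not
taken from the report):

    /* Assume a 64KiB host page and TARGET_PAGE_BITS == 12 (4KiB target page). */
    ram_addr_t ram_addr = 0x30000;          /* host page aligned                */
    hwaddr xlat         = 0x1800;           /* offset_within_region, not
                                               TARGET_PAGE_BITS aligned         */
    ram_addr_t iotlb    = ram_addr + xlat;  /* 0x31800: low 12 bits are set, so
                                               the removed assert would fire    */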

Fixes: dff1ab68d8c5 ("accel/tcg: Fix the comment for CPUTLBEntryFull")
Signed-off-by: LIU Zhiwei <[email protected]>
---
 accel/tcg/cputlb.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index db3f93fda9..7a50a21a2e 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1168,7 +1168,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     write_flags = read_flags;
     if (is_ram) {
         iotlb = memory_region_get_ram_addr(section->mr) + xlat;
-        assert(!(iotlb & ~TARGET_PAGE_MASK));
         /*
          * Computing is_clean is expensive; avoid all that unless
          * the page is actually writable.
@@ -1231,9 +1230,8 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
 
     /* refill the tlb */
     /*
-     * When memory region is ram, iotlb contains a TARGET_PAGE_BITS
-     * aligned ram_addr_t of the page base of the target RAM.
-     * Otherwise, iotlb contains
+     * When memory region is ram, iotlb contains ram_addr_t of the page base
+     * of the target RAM. Otherwise, iotlb contains
      *  - a physical section number in the lower TARGET_PAGE_BITS
      *  - the offset within section->mr of the page base (I/O, ROMD) with the
      *    TARGET_PAGE_BITS masked off.
-- 
2.17.1

