On 28/1/26 16:23, Jim Shu wrote:
'CPUTLBEntryFull.xlat_section' stores a section_index in its low 12 bits,
used to find the correct section when the CPU accesses an I/O region
through the IOTLB. However, section_index is only unique within a single
AddressSpace. If the address space translation goes through an
IOMMUMemoryRegion, it can return a section belonging to another
AddressSpace. The iotlb_to_section() API only looks up sections in the
CPU's AddressSpace, so it cannot find a section that lives in another
AddressSpace. Thus, iotlb_to_section() returns the wrong section and QEMU
performs the wrong load/store access.
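To make the failure mode concrete for other readers: the pre-patch
lookup is essentially the following (my paraphrase of iotlb_to_section()
in system/physmem.c -- RCU access and assertions omitted, so treat it as
a sketch rather than the exact code):

    int asidx = cpu_asidx_from_attrs(cpu, attrs);
    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
    int section_index = index & ~TARGET_PAGE_MASK;    /* low 12 bits */
    MemoryRegionSection *section = &d->map.sections[section_index];
    /*
     * If the IOMMU translation resolved into a different AddressSpace,
     * section_index was allocated in *that* AS's section map, so this
     * lookup in the CPU AS's own map returns an unrelated section.
     */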
To fix this bug in iotlb_to_section(), store the complete
MemoryRegionSection pointer in CPUTLBEntryFull instead of the
section_index in xlat_section. Rename 'xlat_section' to 'xlat_offset'
since the section_index in its low 12 bits is gone. Also, since we now
use the section pointer in CPUTLBEntryFull (full->section) directly, we
can remove the unused functions iotlb_to_section() and
memory_region_section_get_iotlb().
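With the pointer cached, the I/O slow path no longer needs to decode an
index at all. My reading of the new access path, as a sketch assuming
the field names introduced here (addr/val/op stand in for the
surrounding load helper's locals; not quoted from the patch):

    MemoryRegionSection *section = full->section;
    /* xlat_offset is TARGET_PAGE_BITS-aligned, so adding the full
     * virtual address yields the offset within section->mr. */
    hwaddr mr_offset = full->xlat_offset + addr;
    memory_region_dispatch_read(section->mr, mr_offset, &val, op,
                                full->attrs);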
This bug occurs only when
(1) an IOMMUMemoryRegion is in the path of a CPU access, and
(2) the IOMMUMemoryRegion returns a different target_as and the section
is in an I/O region.
Common IOMMU devices don't have this issue since they are only in the
path of DMA accesses. Currently, the bug only triggers when the ARM MPC
device (hw/misc/tz-mpc.c) returns 'blocked_io_as' to emulate
blocked-access handling. The upcoming RISC-V wgChecker [1] and IOPMP [2]
devices are also affected by this bug (see the sketch after the links).
[1] RISC-V WG:
https://patchew.org/QEMU/[email protected]/
[2] RISC-V IOPMP:
https://patchew.org/QEMU/[email protected]/
Signed-off-by: Jim Shu <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Also implicit on v1:
Tested-by: Mark Burton <[email protected]>
---
accel/tcg/cputlb.c | 22 ++++++++++------------
include/accel/tcg/iommu.h | 15 ---------------
include/exec/cputlb.h | 4 ++--
include/hw/core/cpu.h | 17 +++++++++--------
system/physmem.c | 25 -------------------------
5 files changed, 21 insertions(+), 62 deletions(-)
diff --git a/include/accel/tcg/iommu.h b/include/accel/tcg/iommu.h
index 90cfd6c0ed1..547f8ea0ef0 100644
--- a/include/accel/tcg/iommu.h
+++ b/include/accel/tcg/iommu.h
@@ -14,18 +14,6 @@
#include "exec/hwaddr.h"
#include "exec/memattrs.h"
-/**
- * iotlb_to_section:
- * @cpu: CPU performing the access
- * @index: TCG CPU IOTLB entry
- *
- * Given a TCG CPU IOTLB entry, return the MemoryRegionSection that
- * it refers to. @index will have been initially created and returned
- * by memory_region_section_get_iotlb().
- */
-MemoryRegionSection *iotlb_to_section(CPUState *cpu,
- hwaddr index, MemTxAttrs attrs);
Glad you removed this; we should not have involved CPUState here.
-
MemoryRegionSection *address_space_translate_for_iotlb(CPUState *cpu,
int asidx,
hwaddr addr,
@@ -34,8 +22,5 @@ MemoryRegionSection *address_space_translate_for_iotlb(CPUState *cpu,
MemTxAttrs attrs,
int *prot);
-hwaddr memory_region_section_get_iotlb(CPUState *cpu,
- MemoryRegionSection *section);
-
#endif
diff --git a/include/exec/cputlb.h b/include/exec/cputlb.h
index 0d1d46429c9..3a9603a6965 100644
--- a/include/exec/cputlb.h
+++ b/include/exec/cputlb.h
@@ -44,8 +44,8 @@ void tlb_reset_dirty_range_all(ram_addr_t start, ram_addr_t length);
* @full: the details of the tlb entry
*
* Add an entry to @cpu tlb index @mmu_idx. All of the fields of
- * @full must be filled, except for xlat_section, and constitute
- * the complete description of the translated page.
+ * @full must be filled, except for xlat_offset & section, and
+ * constitute the complete description of the translated page.
*
* This is generally called by the target tlb_fill function after
* having performed a successful page table walk to find the physical
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 61da2ea4331..98678704a64 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -219,15 +219,16 @@ typedef uint32_t MMUIdxMap;
*/
struct CPUTLBEntryFull {
/*
- * @xlat_section contains:
- * - in the lower TARGET_PAGE_BITS, a physical section number
- * - with the lower TARGET_PAGE_BITS masked off, an offset which
- * must be added to the virtual address to obtain:
- * + the ram_addr_t of the target RAM (if the physical section
- * number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
- * + the offset within the target MemoryRegion (otherwise)
+ * @xlat_offset: TARGET_PAGE_BITS aligned offset which must be added to
+ * the virtual address to obtain:
+ * + the ram_addr_t of the target RAM (if the physical section
+ * number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
+ * + the offset within the target MemoryRegion (otherwise)
*/
- hwaddr xlat_section;
+ hwaddr xlat_offset;
+
+ /* @section contains physical section. */
+ MemoryRegionSection *section;
LGTM!
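One note for the archives: the fill side that writes these two fields
should end up looking something like the sketch below (my rough
paraphrase of tlb_set_page_full() in accel/tcg/cputlb.c after this
patch, simplifying away the is_ram special cases):

    section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
                                                &xlat, &sz, full->attrs,
                                                &prot);
    /* was: xlat_section = iotlb - addr_page, with the section index
     * ORed into the low bits; now the index math disappears */
    desc->fulltlb[index].xlat_offset = xlat - addr_page;
    desc->fulltlb[index].section = section;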