Hi Eric,

On 2026/3/3 17:41, Eric Auger wrote:
Hi Tao,

On 2/21/26 11:16 AM, Tao Tang wrote:
Enhance the page table walker to correctly handle secure and non-secure
memory accesses. This change introduces logic to select the appropriate
address space and enforce architectural security policies during walks.

The page table walker now correctly processes Secure Stage 1
translations. Key changes include:

- The get_pte function now uses the security context to fetch table
entries from either the Secure or Non-secure address space.

- The stage 1 walker tracks the security state, respecting the NSCFG
and NSTable attributes. It correctly handles the hierarchical security
model: if a table descriptor in a secure walk has NSTable=1, all
subsequent lookups for that walk are forced into the Non-secure space.
This is a one-way transition, as specified by the architecture.

- The final TLB entry is tagged with the correct output address space,
ensuring proper memory isolation.

Note: Secure stage 2 translations are not yet supported, so the ns_as member
in SMMUTransCfg is used to cache the Non-secure address space rather than
refactoring smmu_ptw_64_s2 to take an SMMUState context.

Signed-off-by: Tao Tang <[email protected]>
---
  hw/arm/smmu-common.c | 50 ++++++++++++++++++++++++++++++++++++++++----
  hw/arm/smmuv3.c      |  1 +
  2 files changed, 47 insertions(+), 4 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index a732303b28b..84e71df6767 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -406,15 +406,16 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
  /**
   * get_pte - Get the content of a page table entry located at
   * @base_addr[@index]
+ * @as: AddressSpace to read from
while at it let's properly document all parameters in the doc comment.


I'll document attrs in V5.

   */
  static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
-                   SMMUPTWEventInfo *info)
+                   SMMUPTWEventInfo *info, AddressSpace *as, MemTxAttrs attrs)
  {
      int ret;
      dma_addr_t addr = baseaddr + index * sizeof(*pte);
      /* TODO: guarantee 64-bit single-copy atomicity */
-    ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
+    ret = ldq_le_dma(as, addr, pte, attrs);
      if (ret != MEMTX_OK) {
          info->type = SMMU_PTW_ERR_WALK_EABT;
@@ -538,6 +539,9 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
      SMMUStage stage = cfg->stage;
      SMMUTransTableInfo *tt = select_tt(cfg, iova);
      uint8_t level, granule_sz, inputsize, stride;
+    int nscfg, current_ns, new_nstable;
+    bool sid_is_ns = cfg->sec_sid == SMMU_SEC_SID_NS;
+    bool forced_ns = false;  /* Once true, NSTable is ignored */
      if (!tt || tt->disabled) {
          info->type = SMMU_PTW_ERR_TRANSLATION;
@@ -552,6 +556,8 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
      baseaddr = extract64(tt->ttb, 0, cfg->oas);
      baseaddr &= ~indexmask;
+    nscfg = tt->nscfg;
+    forced_ns = sid_is_ns || nscfg;
do you really need this forced_ns?
can't you simply set current_ns = sid_is_ns || nscfg,
with the first value of nscfg set to tt->nscfg and next-level ones set
from NSTable while !current_ns


You're right. forced_ns could be simplified.


      while (level < VMSA_LEVELS) {
          uint64_t subpage_size = 1ULL << level_shift(level, granule_sz);
@@ -560,8 +566,17 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
          uint64_t pte, gpa;
          dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
          uint8_t ap;
+        AddressSpace *pte_as;
+        MemTxAttrs pte_attrs;

-        if (get_pte(baseaddr, offset, &pte, info)) {
+        /*
+         * Start in NS for Non-secure streams or CD.NSCFGx == 1.
+         * Once walk is in NS, NSTable is ignored on subsequent levels.
+         */
+        current_ns = forced_ns || nscfg;
+        pte_as = current_ns ? &bs->memory_as : &bs->secure_memory_as;
+        pte_attrs = current_ns ? MEMTXATTRS_UNSPECIFIED : cfg->txattrs;
+        if (get_pte(baseaddr, offset, &pte, info, pte_as, pte_attrs)) {
                  goto error;
          }
          trace_smmu_ptw_level(stage, level, iova, subpage_size,
@@ -586,6 +601,23 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
                      goto error;
                  }
              }
+
+            /*
+             * NSTable can switch the walk to NS only while the current walk
+             * level is Secure. Once switched to NS, NSTable is ignored
+             * according to hierarchical control of Secure/Non-secure accesses:
+             * (IHI 0070G.b) 13.4.1 Stage 1 page permissions and
+             * (DDI 0487H.a) D8.4.2 Control of Secure or Non-secure memory
+             * access
+             */
+            if (!forced_ns) {
+                new_nstable = PTE_NSTABLE(pte);
+                if (new_nstable) {
+                    forced_ns = true;
+                    nscfg = 1;
+                } else {
+                    nscfg = 0;
+                }
+            }
              level++;
              continue;
          } else if (is_page_pte(pte, level)) {
@@ -628,6 +660,13 @@ static int smmu_ptw_64_s1(SMMUState *bs, SMMUTransCfg *cfg,
              goto error;
          }
+        if (current_ns) {
+            tlbe->sec_sid = SMMU_SEC_SID_NS;
+        } else {
+            tlbe->sec_sid = PTE_NS(pte) ? SMMU_SEC_SID_NS : SMMU_SEC_SID_S;
+        }
+        tlbe->entry.target_as = (tlbe->sec_sid == SMMU_SEC_SID_S)
+                                ? &bs->secure_memory_as : &bs->memory_as;
So I understand you need to set the target_as depending on sec_sid but
not sure why need to store the sec_sid in the tlbe?


I will use a local sec_sid to generate target_as and drop sec_sid from the tlbe, as mentioned in the previous thread.


          tlbe->entry.translated_addr = gpa;
          tlbe->entry.iova = iova & ~mask;
          tlbe->entry.addr_mask = mask;
@@ -697,7 +736,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
          dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
          uint8_t s2ap;
-        if (get_pte(baseaddr, offset, &pte, info)) {
+        if (get_pte(baseaddr, offset, &pte, info, cfg->ns_as,
+                    MEMTXATTRS_UNSPECIFIED)) {
                  goto error;
          }
          trace_smmu_ptw_level(stage, level, ipa, subpage_size,
@@ -750,6 +790,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
              goto error_ipa;
          }
+        tlbe->sec_sid = SMMU_SEC_SID_NS;
+        tlbe->entry.target_as = cfg->ns_as;
Shouldn't you handle STE.NSCFG in case STE belongs to the secure stream
table (ns_sid=secure) and stage 1 is disabled?

Also shall we handle SMMU_IDR1.ATTR_PERMS_OVR?


I indeed ignored all Secure-related handling in stage-2, as I originally planned to implement those parts in the upcoming S_IDR1.SEL2 feature series.


But the scenario you mentioned seems to be unrelated to the SEL2 feature after I rechecked the manual, so I'll implement it in V5 along with handling of SMMU_IDR1.ATTR_PERMS_OVR.


Best regards,

Tao

