Helge Deller <[email protected]> writes:
> When the emulated CPU reads or writes a memory location
> a) for which no read/write permission exists, *and*
> b) the access is unaligned (not naturally aligned),
> then the CPU should either
> - trigger a permission fault, or
> - trigger an unaligned access fault.
>
> In the current code the alignment check happens before the memory
> permission checks, so only unaligned access faults will be triggered.
>
> This behaviour breaks the emulation of the PARISC architecture, where the
> CPU performs the memory permission check first. The behaviour can be
> tested with the testcase from the bugzilla report.
>
> Add the necessary code to allow PARISC and possibly other architectures to
> trigger a memory fault instead.
>
> Signed-off-by: Helge Deller <[email protected]>
> Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=219339
>
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 117b516739..dd1da358fb 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1684,6 +1684,26 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
> data->flags = flags;
> }
>
> +/* When accessing unreadable memory unaligned, will the CPU issue
> + * an alignment trap or a memory access trap? */
> +#ifdef TARGET_HPPA
> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 1
> +#else
> +# define CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK 0
> +#endif
I'm pretty certain we don't want to be introducing per-guest hacks into
the core cputlb.c code when we are aiming to make it a compile-once
object.
I guess the real question is where we could put this flag? My gut says
we should expand the MO_ALIGN bits in MemOp to express whether the
alignment check takes precedence over the permission check.
> +
> +static void mmu_check_alignment(CPUState *cpu, vaddr addr,
> + uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
> +{
> + unsigned a_bits;
> +
> + /* Handle CPU specific unaligned behaviour */
> + a_bits = get_alignment_bits(l->memop);
> + if (addr & ((1 << a_bits) - 1)) {
> + cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
> + }
> +}
> +
> /**
> * mmu_lookup: translate page(s)
> * @cpu: generic cpu state
> @@ -1699,7 +1719,6 @@ static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data,
> static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
> {
> - unsigned a_bits;
> bool crosspage;
> int flags;
>
> @@ -1708,10 +1727,8 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>
> tcg_debug_assert(l->mmu_idx < NB_MMU_MODES);
>
> - /* Handle CPU specific unaligned behaviour */
> - a_bits = get_alignment_bits(l->memop);
> - if (addr & ((1 << a_bits) - 1)) {
> - cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra);
> + if (!CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK) {
Then this would be something like:

  if (!(memop & MO_ALIGN_PP))

or something along those lines.
> + mmu_check_alignment(cpu, addr, ra, type, l);
> }
>
> l->page[0].addr = addr;
> @@ -1760,6 +1777,10 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
> tcg_debug_assert((flags & TLB_BSWAP) == 0);
> }
>
> + if (CPU_ALIGNMENT_CHECK_AFTER_MEMCHECK) {
> + mmu_check_alignment(cpu, addr, ra, type, l);
> + }
> +
> /*
> * This alignment check differs from the one above, in that this is
> * based on the atomicity of the operation. The intended use case is
--
Alex Bennée
Virtualisation Tech Lead @ Linaro