On Tue, 7 Nov 2023 at 03:17, Richard Henderson
<[email protected]> wrote:
>
> From: Helge Deller <[email protected]>
>
> The previous decoding misnamed the bit it called "local".
> Other than the name, the implementation was correct for pa1.x.
> Rename this field to "tlbe".
>
> PA2.0 adds (a real) local bit to PxTLB, and also adds a range
> of pages to flush in GR[b].
>
> Signed-off-by: Helge Deller <[email protected]>
> Signed-off-by: Richard Henderson <[email protected]>
Hi; Coverity points out a potential overflow in this code:
> -/* Purge (Insn/Data) TLB. This is explicitly page-based, and is
> - synchronous across all processors. */
> +/* Purge (Insn/Data) TLB. */
> static void ptlb_work(CPUState *cpu, run_on_cpu_data data)
> {
> CPUHPPAState *env = cpu_env(cpu);
> - target_ulong addr = (target_ulong) data.target_ptr;
> + vaddr start = data.target_ptr;
> + vaddr end;
>
> - hppa_flush_tlb_range(env, addr, addr);
> + /*
> + * PA2.0 allows a range of pages encoded into GR[b], which we have
> + * copied into the bottom bits of the otherwise page-aligned address.
> + * PA1.x will always provide zero here, for a single page flush.
> + */
> + end = start & 0xf;
> + start &= TARGET_PAGE_MASK;
> + end = TARGET_PAGE_SIZE << (2 * end);
Here 2 * end can be as large as 30 (end is masked to 0xf), but
TARGET_PAGE_SIZE is only a 32-bit type, so the shift can overflow.
Cast TARGET_PAGE_SIZE to vaddr before doing the shift? (CID 1523902)
> + end = start + end - 1;
> +
> + hppa_flush_tlb_range(env, start, end);
> }
thanks
-- PMM