On Fri, Oct 24, 2025 at 5:48 AM Yuhao Jiang <[email protected]> wrote:
>
> The code contains an out-of-bounds write vulnerability due to insufficient
> bounds validation. A negative pg_start value, or integer overflow in
> pg_start + page_count, can bypass the existing bounds check.
>
> For example, pg_start=-1 with page_count=1 produces a sum of 0, passing
> the check `(pg_start + page_count) > num_entries`, but the function later
> writes to ptes[-1]. Similarly, pg_start=LONG_MAX-5 with page_count=10
> overflows, bypassing the check.
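
If I'm reading the report right, the check in alpha_core_agp_insert_memory()
is roughly of this shape (paraphrased, not the exact source), and a trivial
user-space program shows why a negative pg_start slips through:

#include <stdio.h>
#include <sys/types.h>	/* off_t */

/* Paraphrase of the check: pg_start is a signed off_t, page_count and
 * num_entries are plain ints, and only the sum is compared. */
static int bounds_ok(off_t pg_start, int page_count, int num_entries)
{
	return (pg_start + page_count) > num_entries ? 0 : 1;
}

int main(void)
{
	/* -1 + 1 == 0, which is <= num_entries, so the check passes even
	 * though the later loop would start writing at ptes[-1].  (The
	 * LONG_MAX case is signed overflow, i.e. undefined behaviour in C,
	 * so it isn't demonstrated here.) */
	printf("pg_start=-1, page_count=1, num_entries=1024 -> %s\n",
	       bounds_ok(-1, 1, 1024) ? "accepted" : "rejected");
	return 0;
}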

I guess the bounds checking in the AGP code for Alpha has some limitations
in how it's implemented. I spent some time looking at how the bounds checking
done in alpha_core_agp_insert_memory() is handled on other architectures, and
I see some of the same issues in, for example, parisc_agp_insert_memory() as
well as amd64_insert_memory(), which even has a /* FIXME: could wrap */
comment at its bounds check. I guess even agp_generic_insert_memory() has
similar limitations. Is this because, at some point, it was decided that this
would never become a real problem and there was no need to mess with old code
that isn't really that broken, or just that no one ever got around to fixing
it properly?
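
For the record, I'd imagine the fix ends up looking roughly like the below.
This is a completely untested sketch: I'm assuming check_add_overflow() from
<linux/overflow.h> is acceptable in this code and that page_count is still a
signed int here; the names mirror alpha_core_agp_insert_memory(), but the
same shape ought to apply to the other call sites:

	off_t end;

	/* Reject negative inputs outright, then let check_add_overflow()
	 * catch any wrap before comparing against num_entries. */
	if (pg_start < 0 || mem->page_count < 0)
		return -EINVAL;
	if (check_add_overflow(pg_start, (off_t)mem->page_count, &end) ||
	    end > num_entries)
		return -EINVAL;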

If it needs fixing, should we try to fix it for all architectures that have
similar limitations?

Magnus
