On 15.07.2021 07:18, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1519,6 +1519,26 @@ static void free_heap_pages(
>      spin_unlock(&heap_lock);
>  }
>  
> +#ifdef CONFIG_STATIC_MEMORY
> +/* Equivalent of free_heap_pages to free nr_mfns pages of static memory. */
> +void __init free_staticmem_pages(struct page_info *pg, unsigned long nr_mfns,
> +                                 bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    unsigned long i;
> +
> +    for ( i = 0; i < nr_mfns; i++ )
> +    {
> +        mark_page_free(&pg[i], mfn_add(mfn, i));
> +
> +        if ( need_scrub )
> +        {
> +            /* TODO: asynchronous scrubbing for pages of static memory. */
> +            scrub_one_page(&pg[i]);
> +        }
> +    }
> +}
> +#endif
Btw, I think the lack of locking warrants extending the comment above
the function, to spell out what the implications are (in particular:
calls here need to happen early enough).
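As a sketch only (the exact wording and constraints are of course yours to
decide; the "boot time, single CPU" framing below is my assumption of what
"early enough" means here), the extended comment could look something like:

```c
/*
 * Equivalent of free_heap_pages() to free nr_mfns pages of static memory.
 *
 * Unlike free_heap_pages(), this doesn't take the heap lock, and the pages
 * never enter the buddy allocator's free lists. Callers must therefore
 * invoke it early enough during boot that no other party can concurrently
 * observe or modify the affected struct page_info instances.
 */
```

This way the locking assumption is spelled out next to the function rather
than left implicit in the call sites.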
Jan