Minimal test case.  It generates a list of 1000 extra prerequisites, which seems to be enough to force the file hash table to rehash while it is being iterated.

list := $(shell for n in $$(seq 1 1000); do echo file_$$n; done)
all: .EXTRA_PREREQS := $(list)
all:
	echo "fin"

And yes, making a copy of the table appears to fix it.
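For anyone skimming the thread, here is a small standalone sketch of the copy-before-iterate idea (the names below, such as struct table and table_map_arg, are made up for illustration and are not from make's hash.c): snapshot the occupied slots into a temporary null-terminated array and walk the copy, so a rehash triggered by the callback cannot invalidate the iteration.  The quoted patch does the equivalent inside hash_map_arg with alloca and hash_dump.

/* Standalone illustration of the copy-before-iterate idea.  The names here
   (struct table, table_map_arg, ...) are made up for the example and are
   not part of make's hash.c.  */
#include <stdio.h>
#include <stdlib.h>

struct table
  {
    void **vec;   /* slot array; NULL means vacant */
    size_t size;  /* number of slots */
    size_t fill;  /* number of occupied slots */
  };

typedef void (*map_arg_func) (const void *item, void *arg);

/* Walk a snapshot of the occupied slots instead of the live slot array, so
   that a rehash triggered inside MAP (which would reallocate VEC) cannot
   invalidate the iteration.  */
static void
table_map_arg (struct table *t, map_arg_func map, void *arg)
{
  void **copy = malloc (sizeof *copy * (t->fill + 1));
  size_t i, n = 0;

  for (i = 0; i < t->size; i++)
    if (t->vec[i])
      copy[n++] = t->vec[i];
  copy[n] = NULL;               /* null-terminate, like hash_dump does */

  for (i = 0; copy[i]; i++)
    (*map) (copy[i], arg);      /* MAP may now insert into T safely */

  free (copy);
}

static void
print_item (const void *item, void *arg)
{
  (void) arg;
  printf ("%s\n", (const char *) item);
}

int
main (void)
{
  static void *slots[8] = { "file_1", NULL, "file_2", NULL, "file_3" };
  struct table t = { slots, 8, 3 };

  table_map_arg (&t, print_item, NULL);
  return 0;
}

Compiled on its own this just prints the three occupied entries; the point is only that the walk no longer depends on the live slot array staying put while the callback runs.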

On Sat, Apr 19, 2025 at 9:26 PM Dmitry Goncharov <dgoncha...@users.sf.net>
wrote:

> On Fri, Apr 18, 2025 at 11:12 PM Shim Manning <shimmann...@gmail.com>
> wrote:
> >
> > Seems that using .EXTRA_PREREQS can cause a segfault under the right
> > conditions.
> ...
> > snap_deps
> > hash_map_arg   (loop happens here)
> > snap_file
> > expand_extra_prereqs
> > enter_file
> > hash_insert_at
> > hash_rehash  (loop no longer valid)
> >
> > causes a fairly consistent crash. I'm working on a minimal reproduction
> > if it's necessary, but allowing the table to be rehashed while iterating
> > seems like a problem.
>
> Thank you for your report.
> A test case to reproduce would be good. We'll need to add a test along
> with a fix.
> Given that you have a setup that crashes consistently, let me ask you
> to try out this fix and tell us if this helps.
>
> diff --git a/src/hash.c b/src/hash.c
> index 41e16895..23d6cd4d 100644
> --- a/src/hash.c
> +++ b/src/hash.c
> @@ -235,14 +235,16 @@ hash_map (struct hash_table *ht, hash_map_func_t map)
>  void
>  hash_map_arg (struct hash_table *ht, hash_map_arg_func_t map, void *arg)
>  {
> -  void **slot;
> -  void **end = &ht->ht_vec[ht->ht_size];
> -
> -  for (slot = ht->ht_vec; slot < end; slot++)
> -    {
> -      if (!HASH_VACANT (*slot))
> -        (*map) (*slot, arg);
> -    }
> +  /* Need to call 'map' for each item in 'ht'.
> +    Cannot iterate through 'ht', because 'map' can attempt to insert to 'ht'
> +    and cause 'ht' to be rehashed.  Copy all items from 'ht' to a temporary
> +    array 't' and then iterate through the temporary array.  */
> +  const struct file **t;
> +  t = alloca (sizeof (const struct file *) * (ht->ht_fill + 1));
> +  hash_dump (ht, (void **) t, 0);
> +  /* t is null terminated.  */
> +  for (; *t; ++t)
> +    (*map) (*t, arg);
>  }
>
>  /* Double the size of the hash table in the event of overflow... */
>
>
> regards, Dmitry
>
