Hi,

On Tue, Mar 17, 2026 at 11:51 PM Masahiko Sawada <[email protected]> wrote:
>
> I find the current behavior of the autovacuum_parallel_workers storage
> parameter somewhat unintuitive for users. The documentation currently
> states:
>
> +     <para>
> +      Sets the maximum number of parallel autovacuum workers that can process
> +      indexes of this table.
> +      The default value is -1, which means no parallel index vacuuming for
> +      this table. If value is 0 then parallel degree will computed based on
> +      number of indexes.
> +      Note that the computed number of workers may not actually be available 
> at
> +      run time. If this occurs, the autovacuum will run with fewer workers
> +      than expected.
> +     </para>
>
> It is quite confusing that setting the value to 0 does not actually
> disable the parallel vacuum. In many other PostgreSQL parameters, 0
> typically means "off" or "no workers." I think that this parameter
> should behave as follows:
>
> -1: Use the value of autovacuum_max_parallel_workers (GUC) as the
> limit (fallback).
> >=0: Use the specified value as the limit, capped by 
> >autovacuum_max_parallel_workers. (Specifically, setting this to 0 would 
> >disable parallel vacuum for the table).
>

Actually, we have several places in the code where "-1" means disabled and "0"
means choosing a parallel degree based on the number of indexes. Since this
is internal logic, I agree that we should make the parameter more intuitive
to the user, though it will make the code a bit confusing.

> Currently, the patch implements parallel autovacuum as an "opt-in"
> style. That is, even after setting the GUC to >0, users must manually
> set the storage parameter for each table. This assumes that users
> already know exactly which tables need parallel vacuum.
>
> However, I believe it would be more intuitive to let the system decide
> which tables are eligible for parallel vacuum based on index size and
> count (via min_parallel_index_scan_size, etc.), rather than forcing
> manual per-table configuration. Therefore, I'm thinking we might want
> to make it "opt-out" style by default instead:
>
> - Set the default value of the storage parameter to -1 (i.e., fallback to 
> GUC).
> - the default value of the GUC autovacuum_max_parallel_workers at 0.
>
> With this configuration:
>
> - Parallel autovacuum is disabled by default.
> - Users can enable it globally by simply setting the GUC to >0.
> - Users can still disable it for specific tables by setting the
> storage parameter to 0.
>
> What do you think?

I'm afraid that I can't agree with you here. As I wrote above [1], the
parallel a/v feature will be useful when a user has a few huge tables with
a large number of indexes. Only these tables require parallel processing,
and the user knows which ones they are.

If we implement the feature as you suggested, then after setting
av_max_parallel_workers to N > 0, the user will have to manually disable
parallel processing for all tables except the largest ones. This is needed
to ensure that parallel workers are launched specifically to process the
largest tables and are not wasted on processing small ones.

I.e. I'm proposing a design that requires manual actions to *enable*
parallel a/v for several large tables rather than to *disable* it for all of
the remaining tables in the cluster. I'm sure that's what users want.

Allowing the system to decide which tables to process in parallel is a good
way from a design perspective. But I'm thinking of the following example:
imagine that we have a threshold which, when exceeded, triggers parallel a/v.
Several a/v workers encounter tables that exceed this threshold by 1_000, and
each of these workers decides to launch a few parallel workers. Another a/v
worker encounters a table that is beyond this threshold by 1_000_000 and
tries to launch N parallel workers, but faces a max_parallel_workers
shortage. Thus, processing of this table will take a very long time to
complete due to the lack of resources. The only way for users to avoid this
is to disable parallel a/v for all tables that exceed the threshold and are
not of particular interest.

I cannot imagine how our heuristics could handle such situations. IMHO it
will come down to users manually disabling parallel a/v for a large number
of tables, which can be pretty frustrating.

What do you think?

>
> +{ name => 'autovacuum_max_parallel_workers', type => 'int', context
> => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
> +  short_desc => 'Maximum number of parallel workers that a single
> autovacuum worker can take from bgworkers pool.',
> +  variable => 'autovacuum_max_parallel_workers',
> +  boot_val => '2',
> +  min => '0',
> +  max => 'MAX_BACKENDS',
> +},
>
> How about rephrasing the short description to "Maximum number of
> parallel processes per autovacuum operation."?

I'm not sure that this phrase will be understandable to the user.
I don't see any place where we define the "autovacuum operation" concept,
so I suppose it could be ambiguous. What about "Maximum number of parallel
processes per autovacuuming of one table"?

>
> The maximum value should be MAX_PARALLEL_WORKER_LIMIT.
>

Sure!

>
> I think that it's better to rename PVWorkersStats and PVWorkersUsage
> to PVWorkerStats and PVWorkerUsage (making Worker singular).
>
> I've attached the patch for minor fixes including the above comments.
>

I agree with all proposed fixes. Thank you!

>
> +               if (AmAutoVacuumWorkerProcess())
> +                       elog(DEBUG2,
> +                                ngettext("autovacuum worker: finished
> parallel index processing with %d parallel worker",
> +                                                 "autovacuum worker:
> finished parallel index processing with %d parallel workers",
> +                                                 nworkers),
> +                                nworkers);
>
> Now that having planned and launched logs in autovacuum logs is
> straightforward, let's use these logs in the tests instead and make it
> the first patch. We can apply it independently.
>

OK, I agree.

> We check only the server logs throughout the new tap tests. I think we
> should also confirm that the autovacuum successfully completes. I've
> attached the proposed change to the tap tests.
>

I agree with the proposed changes. BTW, don't we need to limit line length
to 80 characters in the tests? Some tests follow this rule and some do not.

--
Thank you very much for the review and proposed patches!
Please see the updated set of patches. Note that "logging for autovacuum"
is now the first patch.

[1] 
https://www.postgresql.org/message-id/CAJDiXghaazbrQMZZS08d9Ffh2y4w05TgH9dpBhqChv1qNTp%2BxA%40mail.gmail.com

--
Best regards,
Daniil Davydov
From a13a5b269ac51bfba66354123a8be8b0ef5cf64a Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 03:23:38 +0700
Subject: [PATCH v29 5/5] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 18 ++++++++++++++++++
 doc/src/sgml/maintenance.sgml      | 12 ++++++++++++
 doc/src/sgml/ref/create_table.sgml | 21 +++++++++++++++++++++
 3 files changed, 51 insertions(+)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 8cdd826fbd3..7741796c6b0 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9395,6 +9396,23 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that
+          can be used for parallel index vacuuming at one time by a single
+          autovacuum worker. This value is capped by
+          <xref linkend="guc-max-parallel-workers"/>. The default is 2.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..f2a280db569 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process comes across a table that has the
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+    enabled, it will launch parallel workers to vacuum the indexes of this
+    table in parallel. Parallel workers are taken from the pool of processes
+    established by <xref linkend="guc-max-worker-processes"/>, limited by
+    <xref linkend="guc-max-parallel-workers"/>.
+    The number of parallel workers that can be taken from the pool by a single
+    autovacuum worker is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+    configuration parameter.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 982532fe725..e367310a571 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1718,6 +1718,27 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can process
+      the indexes of this table.
+      The default value is 0, which disables parallel index vacuuming for
+      this table. If the value is -1, the parallel degree is computed based on
+      the number of indexes and limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+      parameter.
+      Note that the requested number of workers may not actually be available
+      at run time. If this occurs, autovacuum will run with fewer workers
+      than expected.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
-- 
2.43.0

From cbe0ea08d700f141a50717283b287457961f3eb3 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v29 3/5] Cost based parameters propagation for parallel
 autovacuum

---
 src/backend/commands/vacuum.c         |  21 +++-
 src/backend/commands/vacuumparallel.c | 163 ++++++++++++++++++++++++++
 src/backend/postmaster/autovacuum.c   |   2 +-
 src/include/commands/vacuum.h         |   2 +
 src/tools/pgindent/typedefs.list      |   1 +
 5 files changed, 186 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index bce3a2daa24..1b5ba3ce1ef 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2435,8 +2435,19 @@ vacuum_delay_point(bool is_analyze)
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
-	if (InterruptPending ||
-		(!VacuumCostActive && !ConfigReloadPending))
+	if (InterruptPending)
+		return;
+
+	if (IsParallelWorker())
+	{
+		/*
+		 * Update cost-based vacuum delay parameters for a parallel autovacuum
+		 * worker if any changes are detected.
+		 */
+		parallel_vacuum_update_shared_delay_params();
+	}
+
+	if (!VacuumCostActive && !ConfigReloadPending)
 		return;
 
 	/*
@@ -2450,6 +2461,12 @@ vacuum_delay_point(bool is_analyze)
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
 		VacuumUpdateCosts();
+
+		/*
+		 * Propagate cost-based vacuum delay parameters to shared memory if
+		 * any of them have changed during the config reload.
+		 */
+		parallel_vacuum_propagate_shared_delay_params();
 	}
 
 	/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index b7ffd854009..98aeb66eec4 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -18,6 +18,13 @@
  * the parallel context is re-initialized so that the same DSM can be used for
  * multiple passes of index bulk-deletion and index cleanup.
  *
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
@@ -53,6 +60,31 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+	/*
+	 * The generation counter is incremented by the leader process each time
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
+	 * vacuum workers compare it with their local generation,
+	 * shared_params_generation_local, to detect whether they need to refresh
+	 * their local parameters.
+	 */
+	pg_atomic_uint32 generation;
+
+	slock_t		mutex;			/* protects all fields below */
+
+	/* Parameters to share with parallel workers */
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
+} PVSharedCostParams;
+
 /*
  * Shared information among parallel workers.  So this is allocated in the DSM
  * segment.
@@ -122,6 +154,18 @@ typedef struct PVShared
 
 	/* Statistics of shared dead items */
 	VacDeadItemsInfo dead_items_info;
+
+	/*
+	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
+	 * running parallel maintenance VACUUM.
+	 */
+	bool		is_autovacuum;
+
+	/*
+	 * Struct for syncing cost-based vacuum delay parameters between
+	 * parallel autovacuum workers and their leader.
+	 */
+	PVSharedCostParams cost_params;
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -224,6 +268,11 @@ struct ParallelVacuumState
 	PVIndVacStatus status;
 };
 
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/* See comments in the PVSharedCostParams for the details */
+static uint32 shared_params_generation_local = 0;
+
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -235,6 +284,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
 static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
 
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
@@ -395,6 +445,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+	/*
+	 * Initialize shared cost-based vacuum delay parameters if it's for
+	 * autovacuum.
+	 */
+	if (shared->is_autovacuum)
+	{
+		parallel_vacuum_set_cost_parameters(&shared->cost_params);
+		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		SpinLockInit(&shared->cost_params.mutex);
+
+		pv_shared_cost_params = &(shared->cost_params);
+	}
+
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
 
@@ -460,6 +525,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 	DestroyParallelContext(pvs->pcxt);
 	ExitParallelMode();
 
+	if (AmAutoVacuumWorkerProcess())
+		pv_shared_cost_params = NULL;
+
 	pfree(pvs->will_parallel_vacuum);
 	pfree(pvs);
 }
@@ -537,6 +605,95 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
 }
 
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+	params->cost_delay = vacuum_cost_delay;
+	params->cost_limit = vacuum_cost_limit;
+	params->cost_page_dirty = VacuumCostPageDirty;
+	params->cost_page_hit = VacuumCostPageHit;
+	params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For a non-autovacuum parallel worker this function has no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+	uint32		params_generation;
+
+	Assert(IsParallelWorker());
+
+	/* Quick return if the worker is not running within autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+	Assert(shared_params_generation_local <= params_generation);
+
+	/* Return if the parameters have not changed in the leader */
+	if (params_generation == shared_params_generation_local)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	VacuumCostDelay = pv_shared_cost_params->cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	VacuumUpdateCosts();
+
+	shared_params_generation_local = params_generation;
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in the shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/*
+	 * Quick return if the leader process is not sharing the delay parameters.
+	 */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	/*
+	 * Check if any delay parameters have changed. We can read them without
+	 * locks as only the leader can modify them.
+	 */
+	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+		vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+		VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+		VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+		VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+		return;
+
+	/* Update the shared delay parameters */
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	/*
+	 * Increment the generation of the parameters, i.e. let parallel workers
+	 * know that they should re-read shared cost params.
+	 */
+	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
 /*
  * Compute the number of parallel worker processes to request.  Both index
  * vacuum and index cleanup can be executed with parallel workers.
@@ -1078,6 +1235,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = &(shared->cost_params);
+
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
@@ -1127,6 +1287,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
 	table_close(rel, ShareUpdateExclusiveLock);
 	FreeAccessStrategy(pvs.bstrategy);
+
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = NULL;
 }
 
 /*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index ff57d8fca2a..adccfa06775 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1659,7 +1659,7 @@ VacuumUpdateCosts(void)
 	}
 	else
 	{
-		/* Must be explicit VACUUM or ANALYZE */
+		/* Must be an explicit VACUUM or ANALYZE, or a parallel autovacuum worker */
 		vacuum_cost_delay = VacuumCostDelay;
 		vacuum_cost_limit = VacuumCostLimit;
 	}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 953a506181e..cc154737115 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -423,6 +423,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkerStats *wstats);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4c230ee38ca..ca99953df27 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,6 +2088,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkerUsage
 PVWorkerStats
 PX_Alias
-- 
2.43.0

From 84b220f99866343efb5d5cece5b0392153043f1e Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:18:09 +0700
Subject: [PATCH v29 2/5] Parallel autovacuum

---
 src/backend/access/common/reloptions.c        | 11 ++++++++++
 src/backend/commands/vacuumparallel.c         | 20 +++++++++++++------
 src/backend/postmaster/autovacuum.c           | 14 +++++++++++--
 src/backend/utils/init/globals.c              |  1 +
 src/backend/utils/misc/guc.c                  |  8 ++++++--
 src/backend/utils/misc/guc_parameters.dat     |  8 ++++++++
 src/backend/utils/misc/postgresql.conf.sample |  1 +
 src/bin/psql/tab-complete.in.c                |  1 +
 src/include/miscadmin.h                       |  1 +
 src/include/utils/rel.h                       |  9 +++++++++
 10 files changed, 64 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 237ab8d0ed9..055585c38f3 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -235,6 +235,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		0, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1968,6 +1977,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 77834b96a21..b7ffd854009 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" will refer to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -374,8 +376,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -555,12 +558,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -599,8 +607,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 219673db930..ff57d8fca2a 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2798,6 +2798,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			multixact_freeze_table_age;
 		int			log_vacuum_min_duration;
 		int			log_analyze_min_duration;
+		int			nparallel_workers = -1; /* disabled by default */
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2858,8 +2859,16 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process indexes of table in parallel. */
+		if (avopts)
+		{
+			if (avopts->autovacuum_parallel_workers > 0)
+				nparallel_workers = avopts->autovacuum_parallel_workers;
+			else if (avopts->autovacuum_parallel_workers == -1)
+				nparallel_workers = 0;
+		}
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -2868,6 +2877,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.log_vacuum_min_duration = log_vacuum_min_duration;
 		tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 		tab->at_params.toast_parent = InvalidOid;
+		tab->at_params.nworkers = nparallel_workers;
 
 		/*
 		 * Later, in vacuum_rel(), we check reloptions for any
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 2;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index d77502838c4..534e58a398c 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3326,9 +3326,13 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
 	 * Other changes might need to affect other workers, so forbid them.
 	 * Note that a parallel autovacuum leader is an exception: only the
 	 * cost-based delay parameters need to be propagated to its parallel
 	 * workers, and we handle that elsewhere where appropriate.
+	 * workers, and we will handle it elsewhere if appropriate.
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index a5a0edf2534..9bd155e99f6 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
   max => '2000000000',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+  short_desc => 'Maximum number of parallel processes per autovacuuming of one table.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '2',
+  min => '0',
+  max => 'MAX_PARALLEL_WORKER_LIMIT',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index e686d88afc4..5e1c62d616c 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -710,6 +710,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5bdbf1530a2..29171efbc1b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1432,6 +1432,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..1981954008e 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,15 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	/*
+	 * Target number of parallel autovacuum workers. 0 by default disables
+	 * parallel vacuum during autovacuum. -1 means choose the parallel degree
+	 * based on the number of indexes (the autovacuum_max_parallel_workers
+	 * parameter will be used as a limit).
+	 */
+	int			autovacuum_parallel_workers;
+
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0

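The new GUC above only bounds the per-table parallel degree; at run time the request is additionally limited by the cluster-wide parallel worker pool. A minimal sketch of that clamping, with illustrative stand-ins for the GUC variables (this is not the patch's actual code):

```c
#include <assert.h>

/* Illustrative stand-ins for the GUCs involved; not the real globals. */
int autovacuum_max_parallel_workers = 2;	/* boot_val in the patch */
int max_parallel_workers = 8;

/*
 * Clamp a per-table parallel worker request by the autovacuum-wide limit
 * and by the cluster-wide parallel worker pool.
 */
int
effective_av_workers(int requested)
{
	int			nworkers = requested;

	if (nworkers > autovacuum_max_parallel_workers)
		nworkers = autovacuum_max_parallel_workers;
	if (nworkers > max_parallel_workers)
		nworkers = max_parallel_workers;
	return nworkers;
}
```

With the boot values above, a table asking for 10 workers would be clamped to 2, which matches the "limited by max_parallel_workers" note in postgresql.conf.sample.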
From 1a7a9afce5ed6bc6a09f27b8b45dbe9a67b08978 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:50:23 +0700
Subject: [PATCH v29 4/5] Tests for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c          |   9 +
 src/backend/commands/vacuumparallel.c         |  18 ++
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/test_autovacuum/.gitignore   |   2 +
 src/test/modules/test_autovacuum/Makefile     |  20 ++
 src/test/modules/test_autovacuum/meson.build  |  15 ++
 .../t/001_parallel_autovacuum.pl              | 180 ++++++++++++++++++
 8 files changed, 246 insertions(+)
 create mode 100644 src/test/modules/test_autovacuum/.gitignore
 create mode 100644 src/test/modules/test_autovacuum/Makefile
 create mode 100644 src/test/modules/test_autovacuum/meson.build
 create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index c57432670e7..8d2980f3ef0 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -152,6 +152,7 @@
 #include "storage/latch.h"
 #include "storage/lmgr.h"
 #include "storage/read_stream.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
@@ -873,6 +874,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * Trigger the injection point if parallel autovacuum is about to start.
+	 */
+	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
 	 * vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 98aeb66eec4..62b6f50b538 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -46,6 +46,7 @@
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 
@@ -653,6 +654,14 @@ parallel_vacuum_update_shared_delay_params(void)
 	VacuumUpdateCosts();
 
 	shared_params_generation_local = params_generation;
+
+	elog(DEBUG2,
+		 "parallel autovacuum worker updated cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
 }
 
 /*
@@ -895,6 +904,15 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * This injection point is used to wait until parallel autovacuum workers
+	 * finish their part of the index processing.
+	 */
+	if (nworkers > 0)
+		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
 
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 4ac5c84db43..01fe0041c97 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index e2b3eef4136..9dcdc68bc87 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..188ec9f96a2
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..86e392bc0de
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+       'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
+      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..2f34999d25e
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,180 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test we disable autovacuum for the 'test_autovac' table and
+# generate some dead tuples in it. Returns the current autovacuum_count of
+# the table test_autovac.
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql('postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+
+	my $count = $node->safe_psql('postgres', qq{
+		SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+
+	return $count;
+}
+
+# Wait for the table to be vacuumed by an autovacuum worker.
+sub wait_for_autovacuum_complete
+{
+	my ($node, $old_count) = @_;
+
+	$node->poll_query_until('postgres', qq{
+		SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+}
+
+my $psql_out;
+
+my $node = PostgreSQL::Test::Cluster->new('main');
+$node->init;
+
+# Configure postgres so that it can launch parallel autovacuum workers, log
+# the information we are interested in, and run autovacuum frequently
+$node->append_conf('postgresql.conf', qq{
+	max_worker_processes = 20
+	max_parallel_workers = 20
+	autovacuum_max_parallel_workers = 4
+	log_min_messages = debug2
+	autovacuum_naptime = '1s'
+	min_parallel_index_scan_size = 0
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create the injection_points extension needed for testing
+$node->safe_psql('postgres', qq{
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 3;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql('postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql('postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1:
+# Our table has enough indexes and appropriate reloptions, so autovacuum
+# should be able to process it in parallel mode. Check that it actually does.
+
+my $av_count = prepare_for_next_test($node, 1);
+my $log_offset = -s $node->logfile;
+
+$node->safe_psql('postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table is completed. At the same
+# time, check that the required number of parallel workers has been launched.
+wait_for_autovacuum_complete($node, $av_count);
+ok($node->log_contains(qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+					   $log_offset));
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to the parallel workers.
+
+$av_count = prepare_for_next_test($node, 2);
+$log_offset = -s $node->logfile;
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-start-parallel-vacuum'
+);
+
+# Update the shared cost-based delay parameters.
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+# Resume the leader process so that it updates the shared parameters during the
+# heap scan (i.e. when vacuum_delay_point() is called) and launches a parallel
+# vacuum worker, which stops before vacuuming indexes due to the injection point.
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+# Check whether the parallel worker successfully updated all cost parameters
+# during index processing
+$node->wait_for_log(qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+					$log_offset);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+wait_for_autovacuum_complete($node, $av_count);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
-- 
2.43.0

From 7029863373bbb61607ebe5b7070bf2cd70de3091 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 19:01:05 +0700
Subject: [PATCH v29 1/5] Logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 31 +++++++++++++++++++++++++--
 src/backend/commands/vacuumparallel.c | 23 ++++++++++++++------
 src/include/commands/vacuum.h         | 28 ++++++++++++++++++++++--
 src/tools/pgindent/typedefs.list      |  2 ++
 4 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 82c5b28e0ad..c57432670e7 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -343,6 +343,13 @@ typedef struct LVRelState
 	int			num_index_scans;
 	int			num_dead_items_resets;
 	Size		total_dead_items_bytes;
+
+	/*
+	 * Total number of planned and actually launched parallel workers for
+	 * index vacuuming and index cleanup.
+	 */
+	PVWorkerUsage worker_usage;
+
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -781,6 +788,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	vacrel->new_all_visible_all_frozen_pages = 0;
 	vacrel->new_all_frozen_pages = 0;
 
+	vacrel->worker_usage.vacuum.nlaunched = 0;
+	vacrel->worker_usage.vacuum.nplanned = 0;
+	vacrel->worker_usage.cleanup.nlaunched = 0;
+	vacrel->worker_usage.cleanup.nplanned = 0;
+
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
 	 * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze.  Then determine
@@ -1123,6 +1135,19 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 orig_rel_pages == 0 ? 100.0 :
 							 100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
 							 vacrel->lpdead_items);
+
+			if (vacrel->worker_usage.vacuum.nplanned > 0)
+				appendStringInfo(&buf,
+								 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+								 vacrel->worker_usage.vacuum.nplanned,
+								 vacrel->worker_usage.vacuum.nlaunched);
+
+			if (vacrel->worker_usage.cleanup.nplanned > 0)
+				appendStringInfo(&buf,
+								 _("parallel workers: index cleanup: %d planned, %d launched\n"),
+								 vacrel->worker_usage.cleanup.nplanned,
+								 vacrel->worker_usage.cleanup.nlaunched);
+
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
 				IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2669,7 +2694,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											&(vacrel->worker_usage.vacuum));
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3103,7 +3129,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											&(vacrel->worker_usage.cleanup));
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 279108ca89f..77834b96a21 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -225,7 +225,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkerStats *wstats);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -499,7 +499,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkerStats *wstats)
 {
 	Assert(!IsParallelWorker());
 
@@ -510,7 +510,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true, wstats);
 }
 
 /*
@@ -518,7 +518,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkerStats *wstats)
 {
 	Assert(!IsParallelWorker());
 
@@ -530,7 +531,7 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
 }
 
 /*
@@ -607,10 +608,12 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 /*
  * Perform index vacuum or index cleanup with parallel workers.  This function
  * must be used by the parallel vacuum leader process.
+ *
+ * If wstats is not NULL, the parallel worker statistics are updated.
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkerStats *wstats)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -647,6 +650,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/* Update the statistics, if we were asked to */
+	if (wstats != NULL && nworkers > 0)
+		wstats->nplanned += nworkers;
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -703,6 +710,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			/* Enable shared cost balance for leader backend */
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+			/* Update the statistics, if we were asked to */
+			if (wstats != NULL)
+				wstats->nlaunched += pvs->pcxt->nworkers_launched;
 		}
 
 		if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..953a506181e 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,28 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * Statistics for parallel vacuum workers (planned vs. actual)
+ */
+typedef struct PVWorkerStats
+{
+	/* Number of parallel workers planned to launch */
+	int			nplanned;
+
+	/* Number of parallel workers that were successfully launched */
+	int			nlaunched;
+} PVWorkerStats;
+
+/*
+ * PVWorkerUsage stores information about the total number of planned and
+ * launched workers during parallel vacuum (both for index vacuum and cleanup).
+ */
+typedef struct PVWorkerUsage
+{
+	PVWorkerStats vacuum;
+	PVWorkerStats cleanup;
+} PVWorkerUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +416,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkerStats *wstats);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkerStats *wstats);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 52f8603a7be..4c230ee38ca 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,6 +2088,8 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVWorkerStats
+PVWorkerUsage
 PX_Alias
 PX_Cipher
 PX_Combo
-- 
2.43.0

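The logging patch above accumulates planned and launched counts across every index-vacuum pass and the cleanup pass, which is why the log line reads "launched in total". Roughly, the bookkeeping looks like the following sketch, using the struct names the patch introduces (the helper function and the exact accumulation condition are simplifications, not the patch verbatim):

```c
#include <assert.h>

typedef struct PVWorkerStats
{
	int			nplanned;	/* workers planned to launch */
	int			nlaunched;	/* workers actually launched */
} PVWorkerStats;

typedef struct PVWorkerUsage
{
	PVWorkerStats vacuum;	/* index vacuum passes */
	PVWorkerStats cleanup;	/* index cleanup pass */
} PVWorkerUsage;

/* Hypothetical helper: record one pass's planned vs. launched workers. */
void
record_pass(PVWorkerStats *stats, int nplanned, int nlaunched)
{
	if (nplanned > 0)
	{
		stats->nplanned += nplanned;
		stats->nlaunched += nlaunched;
	}
}

/* Example: two index-vacuum passes plus one cleanup pass. */
int
total_launched_example(void)
{
	PVWorkerUsage usage = {{0, 0}, {0, 0}};

	record_pass(&usage.vacuum, 2, 2);	/* first pass: both workers started */
	record_pass(&usage.vacuum, 2, 1);	/* second pass: only one available */
	record_pass(&usage.cleanup, 2, 2);
	return usage.vacuum.nlaunched + usage.cleanup.nlaunched;
}
```

So a vacuum that planned 2 workers per pass but got only 1 in the second pass would log "index vacuum: 4 planned, 3 launched in total".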
From fae3dd1f6bd97acb626a85120bb09e2b18b76f98 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Wed, 18 Mar 2026 15:03:31 +0700
Subject: [PATCH 4/9] fixup for parallel autovacuum core

---
 src/backend/access/common/reloptions.c    |  2 +-
 src/backend/postmaster/autovacuum.c       | 12 +++++++++---
 src/backend/utils/misc/guc_parameters.dat |  4 ++--
 src/include/utils/rel.h                   |  7 ++++---
 4 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 9459a010cc3..055585c38f3 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -242,7 +242,7 @@ static relopt_int intRelOpts[] =
 			RELOPT_KIND_HEAP,
 			ShareUpdateExclusiveLock
 		},
-		-1, -1, 1024
+		0, -1, 1024
 	},
 	{
 		{
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index f153d0343c8..ff57d8fca2a 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2798,6 +2798,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			multixact_freeze_table_age;
 		int			log_vacuum_min_duration;
 		int			log_analyze_min_duration;
+		int			nparallel_workers = -1; /* disabled by default */
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2860,9 +2861,13 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
 
 		/* Decide whether we need to process indexes of table in parallel. */
-		tab->at_params.nworkers = avopts
-			? avopts->autovacuum_parallel_workers
-			: -1;
+		if (avopts)
+		{
+			if (avopts->autovacuum_parallel_workers > 0)
+				nparallel_workers = avopts->autovacuum_parallel_workers;
+			else if (avopts->autovacuum_parallel_workers == -1)
+				nparallel_workers = 0;
+		}
 
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
@@ -2872,6 +2877,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.log_vacuum_min_duration = log_vacuum_min_duration;
 		tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 		tab->at_params.toast_parent = InvalidOid;
+		tab->at_params.nworkers = nparallel_workers;
 
 		/*
 		 * Later, in vacuum_rel(), we check reloptions for any
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 12393c1214b..9bd155e99f6 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -155,11 +155,11 @@
 },
 
 { name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
-  short_desc => 'Maximum number of parallel workers that a single autovacuum worker can take from bgworkers pool.',
+  short_desc => 'Maximum number of parallel workers that can be used to autovacuum a single table.',
   variable => 'autovacuum_max_parallel_workers',
   boot_val => '2',
   min => '0',
-  max => 'MAX_BACKENDS',
+  max => 'MAX_PARALLEL_WORKER_LIMIT',
 },
 
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 11dd3aebc6c..1981954008e 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -313,9 +313,10 @@ typedef struct AutoVacOpts
 	bool		enabled;
 
 	/*
-	 * Target number of parallel autovacuum workers. -1 by default disables
-	 * parallel vacuum during autovacuum. 0 means choose the parallel degree
-	 * based on the number of indexes.
+	 * Target number of parallel autovacuum workers. 0 by default disables
+	 * parallel vacuum during autovacuum. -1 means choose the parallel degree
+	 * based on the number of indexes (the autovacuum_max_parallel_workers
+	 * parameter will be used as a limit).
 	 */
 	int			autovacuum_parallel_workers;
 
-- 
2.43.0

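The fixup above inverts only the user-facing semantics (0 disables, -1 auto-computes) while keeping the pre-existing internal convention (-1 disables, 0 auto-computes), which is the "a bit confusing" code path mentioned earlier in this mail. The translation in table_recheck_autovac reduces to roughly this mapping (a sketch of the hunk's logic, ignoring the avopts-is-NULL case, with an illustrative function name):

```c
#include <assert.h>

/*
 * Map the autovacuum_parallel_workers reloption to the internal
 * at_params.nworkers convention used by vacuum:
 *   reloption  0 (default) -> internal -1 (parallel vacuum disabled)
 *   reloption -1           -> internal  0 (degree chosen from the index
 *                             count, capped by autovacuum_max_parallel_workers)
 *   reloption  n > 0       -> internal  n (explicit worker count)
 */
int
reloption_to_nworkers(int reloption)
{
	int			nparallel_workers = -1;	/* disabled by default */

	if (reloption > 0)
		nparallel_workers = reloption;
	else if (reloption == -1)
		nparallel_workers = 0;
	return nparallel_workers;
}
```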
From 7de90ab01a711a54510c1f41934555ed2fb4c9e4 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Wed, 18 Mar 2026 15:41:55 +0700
Subject: [PATCH 9/9] documentation fixes

---
 doc/src/sgml/ref/create_table.sgml | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 4894de021cd..e367310a571 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1728,9 +1728,10 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
      <para>
       Sets the maximum number of parallel autovacuum workers that can process
       indexes of this table.
-      The default value is -1, which means no parallel index vacuuming for
-      this table. If value is 0 then parallel degree will computed based on
-      number of indexes.
+      The default value is 0, which disables parallel index vacuuming for
+      this table. If the value is -1, the parallel degree is computed from
+      the number of indexes and limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+      parameter.
       Note that the computed number of workers may not actually be available at
       run time. If this occurs, the autovacuum will run with fewer workers
       than expected.
-- 
2.43.0

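For the -1 ("computed based on number of indexes") case the doc patch describes, the computation loosely follows what parallel_vacuum_compute_workers already does for manual VACUUM: take the number of parallelizable indexes, honor an explicit request if there is one, and cap by the relevant worker limit. A simplified sketch under those assumptions (names and the exact cap are illustrative):

```c
#include <assert.h>

/*
 * Sketch of choosing a parallel degree: nindexes_parallel is the number of
 * indexes eligible for parallel processing, nrequested is the user's request
 * (0 means "choose automatically"), and max_workers is the applicable limit
 * (autovacuum_max_parallel_workers for autovacuum).
 */
int
compute_parallel_degree(int nindexes_parallel, int nrequested, int max_workers)
{
	int			parallel_workers;

	if (nrequested > 0)
		parallel_workers = (nrequested < nindexes_parallel) ?
			nrequested : nindexes_parallel;
	else
		parallel_workers = nindexes_parallel;

	if (parallel_workers > max_workers)
		parallel_workers = max_workers;
	return parallel_workers;
}
```

This also illustrates the doc's caveat: the computed number (e.g. 3 eligible indexes) may exceed the configured limit, so autovacuum can end up running with fewer workers than the index count suggests.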
From 1571b7a2eb7dacbd80c697aab937e398c0f70daf Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <[email protected]>
Date: Mon, 16 Mar 2026 15:09:26 -0700
Subject: [PATCH 2/9] fixup for logging.

---
 src/backend/access/heap/vacuumlazy.c  | 35 +++++++++++++--------------
 src/backend/commands/vacuumparallel.c | 21 +++++++---------
 src/include/commands/vacuum.h         | 26 ++++++++++----------
 src/tools/pgindent/typedefs.list      |  4 +--
 4 files changed, 41 insertions(+), 45 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index cccaee5b620..c57432670e7 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -346,9 +346,9 @@ typedef struct LVRelState
 
 	/*
 	 * Total number of planned and actually launched parallel workers for
-	 * index scans.
+	 * index vacuuming and index cleanup.
 	 */
-	PVWorkersUsage workers_usage;
+	PVWorkerUsage worker_usage;
 
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
@@ -788,10 +788,10 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	vacrel->new_all_visible_all_frozen_pages = 0;
 	vacrel->new_all_frozen_pages = 0;
 
-	vacrel->workers_usage.vacuum.nlaunched = 0;
-	vacrel->workers_usage.vacuum.nplanned = 0;
-	vacrel->workers_usage.cleanup.nlaunched = 0;
-	vacrel->workers_usage.cleanup.nplanned = 0;
+	vacrel->worker_usage.vacuum.nlaunched = 0;
+	vacrel->worker_usage.vacuum.nplanned = 0;
+	vacrel->worker_usage.cleanup.nlaunched = 0;
+	vacrel->worker_usage.cleanup.nplanned = 0;
 
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
@@ -1135,20 +1135,19 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 orig_rel_pages == 0 ? 100.0 :
 							 100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
 							 vacrel->lpdead_items);
-			if (vacrel->workers_usage.vacuum.nplanned > 0)
-			{
+
+			if (vacrel->worker_usage.vacuum.nplanned > 0)
 				appendStringInfo(&buf,
 								 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
-								 vacrel->workers_usage.vacuum.nplanned,
-								 vacrel->workers_usage.vacuum.nlaunched);
-			}
-			if (vacrel->workers_usage.cleanup.nplanned > 0)
-			{
+								 vacrel->worker_usage.vacuum.nplanned,
+								 vacrel->worker_usage.vacuum.nlaunched);
+
+			if (vacrel->worker_usage.cleanup.nplanned > 0)
 				appendStringInfo(&buf,
 								 _("parallel workers: index cleanup: %d planned, %d launched\n"),
-								 vacrel->workers_usage.cleanup.nplanned,
-								 vacrel->workers_usage.cleanup.nlaunched);
-			}
+								 vacrel->worker_usage.cleanup.nplanned,
+								 vacrel->worker_usage.cleanup.nlaunched);
+
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
 				IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2696,7 +2695,7 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
 											vacrel->num_index_scans,
-											&vacrel->workers_usage);
+											&(vacrel->worker_usage.vacuum));
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3131,7 +3130,7 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
 											estimated_count,
-											&vacrel->workers_usage);
+											&(vacrel->worker_usage.cleanup));
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 692729efd5e..77834b96a21 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -225,7 +225,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum, PVWorkersStats *wstats);
+												bool vacuum, PVWorkerStats *wstats);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -499,7 +499,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, PVWorkersUsage *wusage)
+									int num_index_scans, PVWorkerStats *wstats)
 {
 	Assert(!IsParallelWorker());
 
@@ -510,8 +510,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true,
-										&wusage->vacuum);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true, wstats);
 }
 
 /*
@@ -520,7 +519,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
 									int num_index_scans, bool estimated_count,
-									PVWorkersUsage *wusage)
+									PVWorkerStats *wstats)
 {
 	Assert(!IsParallelWorker());
 
@@ -532,8 +531,7 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false,
-										&wusage->cleanup);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
 }
 
 /*
@@ -611,12 +609,11 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
  * Perform index vacuum or index cleanup with parallel workers.  This function
  * must be used by the parallel vacuum leader process.
  *
- * If wstats is not NULL, the statistics it stores will be updated according
- * to what happens during function execution.
+ * If wstats is not NULL, the parallel worker statistics are updated.
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum, PVWorkersStats *wstats)
+									bool vacuum, PVWorkerStats *wstats)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -653,7 +650,7 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
-	/* Remember this value, if we asked to */
+	/* Update the statistics, if we were asked to */
 	if (wstats != NULL && nworkers > 0)
 		wstats->nplanned += nworkers;
 
@@ -714,7 +711,7 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
 
-			/* Remember this value, if we asked to */
+			/* Update the statistics, if we were asked to */
 			if (wstats != NULL)
 				wstats->nlaunched += pvs->pcxt->nworkers_launched;
 		}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 1d820915d71..953a506181e 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -301,26 +301,26 @@ typedef struct VacDeadItemsInfo
 } VacDeadItemsInfo;
 
 /*
- * Helper for the PVWorkersUsage structure (see below), to avoid repetition.
+ * Statistics for parallel vacuum workers (planned vs. actual)
  */
-typedef struct PVWorkersStats
+typedef struct PVWorkerStats
 {
-	/* Number of parallel workers we are planned to launch */
+	/* Number of parallel workers we planned to launch */
 	int			nplanned;
 
-	/* Number of launched parallel workers */
+	/* Number of parallel workers that were successfully launched */
 	int			nlaunched;
-} PVWorkersStats;
+} PVWorkerStats;
 
 /*
- * PVWorkersUsage stores information about total number of launched and
- * planned workers during parallel vacuum (both for vacuum and cleanup).
+ * PVWorkerUsage stores the total number of planned and launched workers
+ * during parallel vacuum (both for index vacuum and cleanup).
  */
-typedef struct PVWorkersUsage
+typedef struct PVWorkerUsage
 {
-	PVWorkersStats vacuum;
-	PVWorkersStats cleanup;
-} PVWorkersUsage;
+	PVWorkerStats vacuum;
+	PVWorkerStats cleanup;
+} PVWorkerUsage;
 
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
@@ -417,12 +417,12 @@ extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												PVWorkersUsage *wusage);
+												PVWorkerStats *wstats);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
 												bool estimated_count,
-												PVWorkersUsage *wusage);
+												PVWorkerStats *wstats);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a67d54e1819..4c230ee38ca 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,8 +2088,8 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
-PVWorkersUsage
-PVWorkersStats
+PVWorkerUsage
+PVWorkerStats
 PX_Alias
 PX_Cipher
 PX_Combo
-- 
2.43.0

From 94988438f0530b5804ff6515af16dba8d9bd3118 Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <[email protected]>
Date: Mon, 16 Mar 2026 18:01:45 -0700
Subject: [PATCH 7/9] fixup: update TAP tests.

---
 src/backend/commands/vacuumparallel.c         |  9 +--
 .../t/001_parallel_autovacuum.pl              | 63 +++++++++++--------
 2 files changed, 38 insertions(+), 34 deletions(-)

diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index ef36b9bd286..62b6f50b538 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -656,7 +656,7 @@ parallel_vacuum_update_shared_delay_params(void)
 	shared_params_generation_local = params_generation;
 
 	elog(DEBUG2,
-		 "parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 "parallel autovacuum worker updated cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
 		 vacuum_cost_limit,
 		 vacuum_cost_delay,
 		 VacuumCostPageMiss,
@@ -933,13 +933,6 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
 			InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
-
-		if (AmAutoVacuumWorkerProcess())
-			elog(DEBUG2,
-				 ngettext("autovacuum worker: finished parallel index processing with %d parallel worker",
-						  "autovacuum worker: finished parallel index processing with %d parallel workers",
-						  nworkers),
-				 nworkers);
 	}
 
 	/*
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
index 9ad87d48b96..2f34999d25e 100644
--- a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -11,8 +11,8 @@ if ($ENV{enable_injection_points} ne 'yes')
 }
 
 # Before each test we should disable autovacuum for 'test_autovac' table and
-# generate some dead tuples in it.
-
+# generate some dead tuples in it. Returns the current autovacuum_count of
+# the test_autovac table.
 sub prepare_for_next_test
 {
 	my ($node, $test_number) = @_;
@@ -21,12 +21,27 @@ sub prepare_for_next_test
 		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
 		UPDATE test_autovac SET col_1 = $test_number;
 	});
+
+	my $count = $node->safe_psql('postgres', qq{
+		SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+
+	return $count;
 }
 
+# Wait for the table to be vacuumed by an autovacuum worker.
+sub wait_for_autovacuum_complete
+{
+	my ($node, $old_count) = @_;
+
+	$node->poll_query_until('postgres', qq{
+		SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+}
 
 my $psql_out;
 
-my $node = PostgreSQL::Test::Cluster->new('node1');
+my $node = PostgreSQL::Test::Cluster->new('main');
 $node->init;
 
 # Configure postgres, so it can launch parallel autovacuum workers, log all
@@ -54,7 +69,7 @@ $node->safe_psql('postgres', qq{
 	CREATE EXTENSION injection_points;
 });
 
-my $indexes_num = 4;
+my $indexes_num = 3;
 my $initial_rows_num = 10_000;
 my $autovacuum_parallel_workers = 2;
 
@@ -91,7 +106,8 @@ $node->safe_psql('postgres', qq{
 # Our table has enough indexes and appropriate reloptions, so autovacuum must
 # be able to process it in parallel mode. Just check if it can do it.
 
-prepare_for_next_test($node, 1);
+my $av_count = prepare_for_next_test($node, 1);
+my $log_offset = -s $node->logfile;
 
 $node->safe_psql('postgres', qq{
 	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
@@ -99,16 +115,16 @@ $node->safe_psql('postgres', qq{
 
 # Wait until the parallel autovacuum on table is completed. At the same time,
 # we check that the required number of parallel workers has been started.
-$log_start = $node->wait_for_log(
-	qr/autovacuum worker: finished parallel index processing with 2 parallel workers/,
-	$log_start
-);
+wait_for_autovacuum_complete($node, $av_count);
+ok($node->log_contains(qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+					   $log_offset));
 
 # Test 2:
 # Check whether parallel autovacuum leader can propagate cost-based parameters
 # to the parallel workers.
 
-prepare_for_next_test($node, 2);
+$av_count = prepare_for_next_test($node, 2);
+$log_offset = -s $node->logfile;
 
 $node->safe_psql('postgres', qq{
 	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
@@ -123,8 +139,7 @@ $node->wait_for_event(
 	'autovacuum-start-parallel-vacuum'
 );
 
-# Reload config - leader worker must update its own parameters during indexes
-# processing
+# Update the shared cost-based delay parameters.
 $node->safe_psql('postgres', qq{
 	ALTER SYSTEM SET vacuum_cost_limit = 500;
 	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
@@ -133,12 +148,12 @@ $node->safe_psql('postgres', qq{
 	SELECT pg_reload_conf();
 });
 
+# Resume the leader process so that it updates the shared parameters during the
+# heap scan (i.e. when vacuum_delay_point() is called) and launches a parallel
+# vacuum worker, which then stops before vacuuming indexes at the injection point.
 $node->safe_psql('postgres', qq{
 	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
 });
-
-# Now wait until parallel autovacuum leader completes processing table (i.e.
-# guaranteed to call vacuum_delay_point) and launches parallel worker.
 $node->wait_for_event(
 	'autovacuum worker',
 	'autovacuum-leader-before-indexes-processing'
@@ -146,24 +161,20 @@ $node->wait_for_event(
 
 # Check whether parallel worker successfully updated all parameters during
 # index processing
-$log_start = $node->wait_for_log(
-	qr/parallel autovacuum worker cost params: cost_limit=500, cost_delay=2, / .
-	qr/cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
-	$log_start
-);
+$node->wait_for_log(qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+					$log_offset);
 
-# Cleanup
 $node->safe_psql('postgres', qq{
 	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+wait_for_autovacuum_complete($node, $av_count);
 
+# Cleanup
+$node->safe_psql('postgres', qq{
 	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
 	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
-
-	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = $autovacuum_parallel_workers);
 });
 
-# We were able to get to this point, so everything is fine.
-ok(1);
-
 $node->stop;
 done_testing();
-- 
2.43.0
