Hi,

On Thu, Mar 19, 2026 at 2:49 AM Masahiko Sawada <[email protected]> wrote:
>
> Yes, we already have such a code for PARALLEL option for the VACUUM command.
>
> I guess it's better that autovacuum codes also somewhat follow this
> code for better consistency.
>

I agree. You can find it in the v29-0002 patch.

> > I'm afraid that I can't agree with you here. As I wrote above [1], the
> > parallel a/v feature will be useful when a user has a few huge tables with
> > a big amount of indexes. Only these tables require parallel processing and a
> > user knows about it.
>
> Isn't it a case where users need to increase
> min_parallel_index_scan_size? Suppose that there are two tables that
> are big enough and have enough indexes, it's more natural to me to use
> parallel vacuum for both tables without user manual settings.
>

Do you mean that the user can increase this parameter so that smaller tables
are not considered for the parallel a/v? If so, I don't think it will always
be handy. When I say "smaller tables" I mean that they are small relative to
super huge tables. But actually these "smaller tables" can be pretty big and
require a parallel index scan within parallel queries or VACUUM PARALLEL (not
an autovacuum). Increasing the min_scan_size parameter can decrease
performance of the queries that are relying on the ability to scan indexes
of such tables in parallel. Separated parameter such as
"autovacuum_min_parallel_index_scan_size" could help here, but I don't think
that we want to introduce many new GUC parameters for a single feature.

> > If we implement the feature as you suggested, then after setting the
> > av_max_parallel_workers to N > 0, the user will have to manually disable
> > processing for all tables except the largest ones. This will need to be done
> > to ensure that parallel workers are launched specifically to process the
> > largest tables and not wasting on the processing of little ones.
> >
> > I.e. I'm proposing a design that will require manual actions to *enable*
> > parallel a/v for several large tables rather than *disable* it for all of
> > the rest tables in the cluster. I'm sure that's what users want.
> >
> > Allowing the system to decide which tables to process in parallel is a good
> > way from a design perspective. But I'm thinking of the following example :
> > Imagine that we have a threshold, when exceeded, parallel a/v is used.
> > Several a/v workers encounter tables which exceed this threshold by 1_000 
> > and
> > each of these workers decides to launch a few parallel workers. Another a/v
> > worker encounters a table which is beyond this threshold by 1_000_000 and
> > tries to launch N parallel workers, but facing the max_parallel_workers
> > shortage. Thus, processing of this table will take a very long time to
> > complete due to lack of resources. The only way for users to avoid it is to
> > disable parallel a/v for all tables, which exceeds the threshold and are not
> > of particular interest.
>
> I think the same thing happens even with the current design as long as
> users misconfigure max_parallel_workers, no? Setting
> autovacuum_max_parallel_workers to >0 would mean that users want to
> give additional resources for autovacuums in general, I think it makes
> sense to use parallel vacuum even for tables which exceed the
> threshold by 1000.
>
> Users who want to use parallel autovacuum would have to set
> max_parallel_workers (and max_worker_processes) high enough so that
> each autovacuum worker can use parallel workers. If resource
> contention occurs, it's a sign that the limits are not configured
> properly.
>

Yeah, currently user can misconfigure max_parallel_workers, so (for example)
multiple VACUUM PARALLEL operations running at the same time will face with
a shortage of parallel workers. But I guess that every system has some sane
limit for this parameter's value. If we want to ensure that all a/v leaders
are guaranteed to launch as many parallel workers as required, we might need
to increase the max_parallel_workers too much (and cross the sane limit).
IMHO it may be unacceptable for many systems in production, because it will
undermine the stability.

I don't have direct evidence of my words, so I'll try to get the opinion of
the people who will use the parallel a/v feature in big productions.

> > I'm not sure if this phrase will be understandable to the user.
> > I don't see any places where we would define the "autovacuum operation"
> > concept, so I suppose it could be ambiguous. What about "Maximum number of
> > parallel processes per autovacuuming of one table"?
>
> "autovacuuming of one table" sounds unnatural to me. How about
> "Maximum number of parallel workers that can be used by a single
> autovacuum worker."?
>

It sounds good, I agree.

> >
> > > We check only the server logs throughout the new tap tests. I think we
> > > should also confirm that the autovacuum successfully completes. I've
> > > attached the proposed change to the tap tests.
> > >
> >
> > I agree with proposed changes. BTW, don't we need to reduce the strings
> > length to 80 characters in the tests? In some tests, this rule is followed,
> > and in some it is not.
>
> Yeah, pgperltidy should be run for new tests.
>

OK. I'll do it.

> The 0001 patch looks good to me. I've updated the commit message and
> attached it. I'm going to push the patch, barring any objections.
>

Great news!

> Regarding the documentation changes, I find that the current patch
> needs more explanation at appropriate sections. I think we need to:
>
> 1. describe the new autovacuum_max_parallel_workers GUC parameter (in
> config.sgml)
> 2. describe the new autovacuum_parallel_workers storage parameter (in
> create_table.sgml)
> 3. mention that autovacuum could use parallel vacuum (in maintenance.sgml).
>

I agree.

> I think that part 1 should include the basic explanation of the GUC
> parameter as well as how the number of workers is decided (which could
> be similar to the description for PARALLEL options of the VACUUM
> command).

IMHO, the description of the method for determining the number of parallel
workers will look more appropriate in part 3.

BTW, do we need to mention that this parameter can be overridden by the
per-table setting?

> Part 2 can explain the storage parameter as follow:
>
>   Per-table value for <xref linkend="guc-autovacuum-max-parallel-workers"/>
>   parameter. If -1 is specified,
> <varname>autovacuum_max_parallel_workers</varname>
>   value will be used. The default value is 0.
>

It looks very compact and beautiful, I agree.
Actually, if -1 is specified then we are "choosing the parallel degree based
on the number of indexes". We have several places in the code with such
phrasing. I don't really like it because 1) even if value != -1 we are still
taking the number of indexes into account and 2) basically it is the same as
to say "limited by GUC parameter". I don't want to touch existing comments
in the vacuumparallel.c but in our patch I'd like to say that "GUC parameter's
value will be used". I hope this will not cause any misunderstanding among
readers.

> Part 3 can briefly mention that autovacuum can perform parallel vacuum
> with parallel workers capped by autovacuum_max_parallel_workers as
> follow:
>
>   For tables with the <xref linkend="reloption-autovacuum-parallel-workers"/>
>   storage parameter set, an autovacuum worker can perform index vacuuming and
>   index cleanup with background workers. The number of workers launched by
>   a single autovacuum worker is limited by the
>   <xref linkend="guc-autovacuum-max-parallel-workers"/>.

I suggest adding here also a description of the method for calculating the
number of parallel workers. If so, I feel that this part of documentation will
be completely the same as in VACUUM PARALLEL (except a few little details).
Maybe we can create some dedicated subchapter in the "Routine vacuuming" where
we describe how the number of parallel workers is decided. Lets call it
something like "24.1.7 Parallel Vacuuming". Both VACUUM PARALLEL and parallel
autovacuum can refer to this subchapter. I think it will be much easier to
maintain. What do you think?

--

Thank you very much for the comments and prepared patch!
Please, see an updated set of patches (I didn't touch patches 0001, 0003 and
0005).

The 0001 patch contains a pretty controversial fix for the
"autovacuum_parallel_workers" description, but I didn't come up with anything
better.

--
Best regards,
Daniil Davydov
From 31812fa9b922bad041ceb90a6ff6e0814a5e1f77 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v30 3/5] Cost based parameters propagation for parallel
 autovacuum

---
 src/backend/commands/vacuum.c         |  21 +++-
 src/backend/commands/vacuumparallel.c | 163 ++++++++++++++++++++++++++
 src/backend/postmaster/autovacuum.c   |   2 +-
 src/include/commands/vacuum.h         |   2 +
 src/tools/pgindent/typedefs.list      |   1 +
 5 files changed, 186 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index bce3a2daa24..1b5ba3ce1ef 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2435,8 +2435,19 @@ vacuum_delay_point(bool is_analyze)
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
-	if (InterruptPending ||
-		(!VacuumCostActive && !ConfigReloadPending))
+	if (InterruptPending)
+		return;
+
+	if (IsParallelWorker())
+	{
+		/*
+		 * Update cost-based vacuum delay parameters for a parallel autovacuum
+		 * worker if any changes are detected.
+		 */
+		parallel_vacuum_update_shared_delay_params();
+	}
+
+	if (!VacuumCostActive && !ConfigReloadPending)
 		return;
 
 	/*
@@ -2450,6 +2461,12 @@ vacuum_delay_point(bool is_analyze)
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
 		VacuumUpdateCosts();
+
+		/*
+		 * Propagate cost-based vacuum delay parameters to shared memory if
+		 * any of them have changed during the config reload.
+		 */
+		parallel_vacuum_propagate_shared_delay_params();
 	}
 
 	/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index b7ffd854009..98aeb66eec4 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -18,6 +18,13 @@
  * the parallel context is re-initialized so that the same DSM can be used for
  * multiple passes of index bulk-deletion and index cleanup.
  *
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
@@ -53,6 +60,31 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+	/*
+	 * The generation counter is incremented by the leader process each time
+	 * it updates the shared cost-based vacuum delay parameters. Paralell
+	 * vacuum workers compares it with their local generation,
+	 * shared_params_generation_local, to detect whether they need to refresh
+	 * their local parameters.
+	 */
+	pg_atomic_uint32 generation;
+
+	slock_t		mutex;			/* protects all fields below */
+
+	/* Parameters to share with parallel workers */
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
+} PVSharedCostParams;
+
 /*
  * Shared information among parallel workers.  So this is allocated in the DSM
  * segment.
@@ -122,6 +154,18 @@ typedef struct PVShared
 
 	/* Statistics of shared dead items */
 	VacDeadItemsInfo dead_items_info;
+
+	/*
+	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
+	 * running parallel maintenence VACUUM.
+	 */
+	bool		is_autovacuum;
+
+	/*
+	 * Struct for syncing cost-based vacuum delay parameters between
+	 * supportive parallel autovacuum workers with leader worker.
+	 */
+	PVSharedCostParams cost_params;
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -224,6 +268,11 @@ struct ParallelVacuumState
 	PVIndVacStatus status;
 };
 
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/* See comments in the PVSharedCostParams for the details */
+static uint32 shared_params_generation_local = 0;
+
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -235,6 +284,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
 static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
 
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
@@ -395,6 +445,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+	/*
+	 * Initialize shared cost-based vacuum delay parameters if it's for
+	 * autovacuum.
+	 */
+	if (shared->is_autovacuum)
+	{
+		parallel_vacuum_set_cost_parameters(&shared->cost_params);
+		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		SpinLockInit(&shared->cost_params.mutex);
+
+		pv_shared_cost_params = &(shared->cost_params);
+	}
+
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
 
@@ -460,6 +525,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 	DestroyParallelContext(pvs->pcxt);
 	ExitParallelMode();
 
+	if (AmAutoVacuumWorkerProcess())
+		pv_shared_cost_params = NULL;
+
 	pfree(pvs->will_parallel_vacuum);
 	pfree(pvs);
 }
@@ -537,6 +605,95 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
 }
 
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+	params->cost_delay = vacuum_cost_delay;
+	params->cost_limit = vacuum_cost_limit;
+	params->cost_page_dirty = VacuumCostPageDirty;
+	params->cost_page_hit = VacuumCostPageHit;
+	params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For non-autovacuum parallel worker this function will have no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+	uint32		params_generation;
+
+	Assert(IsParallelWorker());
+
+	/* Quick return if the wokrer is not running for the autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+	Assert(shared_params_generation_local <= params_generation);
+
+	/* Return if parameters had not changed in the leader */
+	if (params_generation == shared_params_generation_local)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	VacuumCostDelay = pv_shared_cost_params->cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	VacuumUpdateCosts();
+
+	shared_params_generation_local = params_generation;
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in the shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/*
+	 * Quick return if the leader process is not sharing the delay parameters.
+	 */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	/*
+	 * Check if any delay parameters has changed. We can read them without
+	 * locks as only the leader can modify them.
+	 */
+	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+		vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+		VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+		VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+		VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+		return;
+
+	/* Update the shared delay parameters */
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	/*
+	 * Increment the generation of the parameters, i.e. let parallel workers
+	 * know that they should re-read shared cost params.
+	 */
+	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
 /*
  * Compute the number of parallel worker processes to request.  Both index
  * vacuum and index cleanup can be executed with parallel workers.
@@ -1078,6 +1235,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = &(shared->cost_params);
+
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
@@ -1127,6 +1287,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
 	table_close(rel, ShareUpdateExclusiveLock);
 	FreeAccessStrategy(pvs.bstrategy);
+
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = NULL;
 }
 
 /*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index e810e1303db..f0535a0997f 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1659,7 +1659,7 @@ VacuumUpdateCosts(void)
 	}
 	else
 	{
-		/* Must be explicit VACUUM or ANALYZE */
+		/* Must be explicit VACUUM or ANALYZE or parallel autovacuum worker */
 		vacuum_cost_delay = VacuumCostDelay;
 		vacuum_cost_limit = VacuumCostLimit;
 	}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 953a506181e..cc154737115 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -423,6 +423,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkerStats *wstats);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a4a2ed07816..d5c7b91e167 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2090,6 +2090,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkerUsage
 PVWorkerStats
 PX_Alias
-- 
2.43.0

From b3409be9b386a2ddd4778c34bd71eca34bf48332 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:18:09 +0700
Subject: [PATCH v30 2/5] Parallel autovacuum

---
 src/backend/access/common/reloptions.c        | 11 ++++++++++
 src/backend/commands/vacuumparallel.c         | 20 +++++++++++++------
 src/backend/postmaster/autovacuum.c           | 18 +++++++++++++++--
 src/backend/utils/init/globals.c              |  1 +
 src/backend/utils/misc/guc.c                  |  8 ++++++--
 src/backend/utils/misc/guc_parameters.dat     |  8 ++++++++
 src/backend/utils/misc/postgresql.conf.sample |  1 +
 src/bin/psql/tab-complete.in.c                |  1 +
 src/include/miscadmin.h                       |  1 +
 src/include/utils/rel.h                       |  2 ++
 10 files changed, 61 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 237ab8d0ed9..03e6fae930e 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -235,6 +235,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Overrides value of the autovacuum_max_parallel_workers parameter for this table, if > -1.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		0, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1968,6 +1977,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 77834b96a21..b7ffd854009 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" will refer to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -374,8 +376,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -555,12 +558,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -599,8 +607,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 219673db930..e810e1303db 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2798,6 +2798,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			multixact_freeze_table_age;
 		int			log_vacuum_min_duration;
 		int			log_analyze_min_duration;
+		int			nparallel_workers = -1; /* disabled by default */
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2858,8 +2859,20 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process indexes of table in parallel. */
+		if (avopts)
+		{
+			if (avopts->autovacuum_parallel_workers > 0)
+				nparallel_workers = avopts->autovacuum_parallel_workers;
+			else if (avopts->autovacuum_parallel_workers == -1)
+			{
+				nparallel_workers = autovacuum_max_parallel_workers > 0
+					? autovacuum_max_parallel_workers
+					: -1; /* disable parallelism if parameter's value is 0 */
+			}
+		}
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -2868,6 +2881,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.log_vacuum_min_duration = log_vacuum_min_duration;
 		tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 		tab->at_params.toast_parent = InvalidOid;
+		tab->at_params.nworkers = nparallel_workers;
 
 		/*
 		 * Later, in vacuum_rel(), we check reloptions for any
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 2;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index e1546d9c97a..45b39b7c47f 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3358,9 +3358,13 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
-	 * Other changes might need to affect other workers, so forbid them.
+	 * Other changes might need to affect other workers, so forbid them. Note,
+	 * that parallel autovacuum leader is an exception, because only
+	 * cost-based delays need to be affected also to parallel autovacuum
+	 * workers, and we will handle it elsewhere if appropriate.
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 0c9854ad8fc..3d2fd35a004 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
   max => '2000000000',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+  short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '2',
+  min => '0',
+  max => 'MAX_PARALLEL_WORKER_LIMIT',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index e4abe6c0077..11d96f4dd4f 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -713,6 +713,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5bdbf1530a2..29171efbc1b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1432,6 +1432,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..cd1e92f2302 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,8 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	int			autovacuum_parallel_workers;
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0

From dc0181fd585ae46c55f209ea6f56cbdff40b35de Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 03:23:38 +0700
Subject: [PATCH v30 5/5] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 18 ++++++++++++++++++
 doc/src/sgml/maintenance.sgml      | 12 ++++++++++++
 doc/src/sgml/ref/create_table.sgml | 21 +++++++++++++++++++++
 3 files changed, 51 insertions(+)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 8cdd826fbd3..7741796c6b0 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9395,6 +9396,23 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that
+          can be used for parallel index vacuuming at one time by a single
+          autovacuum worker. Is capped by <xref linkend="guc-max-parallel-workers"/>.
+          The default is 2.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..f2a280db569 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process comes across a table with the enabled
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter,
+    it will launch parallel workers in order to vacuum indexes of this table
+    in a parallel mode. Parallel workers are taken from the pool of processes
+    established by <xref linkend="guc-max-worker-processes"/>, limited by
+    <xref linkend="guc-max-parallel-workers"/>.
+    The number of parallel workers that can be taken from pool by a single
+    autovacuum worker is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+    configuration parameter.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 982532fe725..e367310a571 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1718,6 +1718,27 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can process
+      indexes of this table.
+      The default value is 0, which means no parallel index vacuuming for
+      this table. If value is -1 then parallel degree will computed based on
+      number of indexes and limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+      parameter.
+      Note that the computed number of workers may not actually be available at
+      run time. If this occurs, the autovacuum will run with fewer workers
+      than expected.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
-- 
2.43.0

From 9941505b9dedb447de37940793e68999c06e7be7 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 19:01:05 +0700
Subject: [PATCH v30 1/5] Add parallel vacuum worker usage to VACUUM (VERBOSE)
 and autovacuum logs.

This commit adds both the number of parallel workers planned and the
number of parallel workers actually launched to the output of
VACUUM (VERBOSE) and autovacuum logs.

Previously, this information was only reported as an INFO message
during VACUUM (VERBOSE), which meant it was not included in autovacuum
logs in practice. Although autovacuum does not yet support parallel
vacuum, a subsequent patch will enable it and utilize these logs in
its regression tests. This change also improves observability by
making it easier to verify if parallel vacuum is utilizing the
expected number of workers.

Author: Daniil Davydov <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Reviewed-by: Sami Imseih <[email protected]>
Discussion: https://postgr.es/m/CACG=ezZOrNsuLoETLD1gAswZMuH2nGGq7Ogcc0QOE5hhWaw=c...@mail.gmail.com
---
 src/backend/access/heap/vacuumlazy.c  | 31 +++++++++++++++++++++++++--
 src/backend/commands/vacuumparallel.c | 23 ++++++++++++++------
 src/include/commands/vacuum.h         | 28 ++++++++++++++++++++++--
 src/tools/pgindent/typedefs.list      |  2 ++
 4 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 82c5b28e0ad..c57432670e7 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -343,6 +343,13 @@ typedef struct LVRelState
 	int			num_index_scans;
 	int			num_dead_items_resets;
 	Size		total_dead_items_bytes;
+
+	/*
+	 * Total number of planned and actually launched parallel workers for
+	 * index vacuuming and index cleanup.
+	 */
+	PVWorkerUsage worker_usage;
+
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -781,6 +788,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	vacrel->new_all_visible_all_frozen_pages = 0;
 	vacrel->new_all_frozen_pages = 0;
 
+	vacrel->worker_usage.vacuum.nlaunched = 0;
+	vacrel->worker_usage.vacuum.nplanned = 0;
+	vacrel->worker_usage.cleanup.nlaunched = 0;
+	vacrel->worker_usage.cleanup.nplanned = 0;
+
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
 	 * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze.  Then determine
@@ -1123,6 +1135,19 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 orig_rel_pages == 0 ? 100.0 :
 							 100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
 							 vacrel->lpdead_items);
+
+			if (vacrel->worker_usage.vacuum.nplanned > 0)
+				appendStringInfo(&buf,
+								 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+								 vacrel->worker_usage.vacuum.nplanned,
+								 vacrel->worker_usage.vacuum.nlaunched);
+
+			if (vacrel->worker_usage.cleanup.nplanned > 0)
+				appendStringInfo(&buf,
+								 _("parallel workers: index cleanup: %d planned, %d launched\n"),
+								 vacrel->worker_usage.cleanup.nplanned,
+								 vacrel->worker_usage.cleanup.nlaunched);
+
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
 				IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2669,7 +2694,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											&(vacrel->worker_usage.vacuum));
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3103,7 +3129,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											&(vacrel->worker_usage.cleanup));
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 279108ca89f..77834b96a21 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -225,7 +225,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkerStats *wstats);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -499,7 +499,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkerStats *wstats)
 {
 	Assert(!IsParallelWorker());
 
@@ -510,7 +510,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true, wstats);
 }
 
 /*
@@ -518,7 +518,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkerStats *wstats)
 {
 	Assert(!IsParallelWorker());
 
@@ -530,7 +531,7 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats);
 }
 
 /*
@@ -607,10 +608,12 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 /*
  * Perform index vacuum or index cleanup with parallel workers.  This function
  * must be used by the parallel vacuum leader process.
+ *
+ * If wstats is not NULL, the parallel worker statistics are updated.
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkerStats *wstats)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -647,6 +650,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/* Update the statistics, if we asked to */
+	if (wstats != NULL && nworkers > 0)
+		wstats->nplanned += nworkers;
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -703,6 +710,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			/* Enable shared cost balance for leader backend */
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+			/* Update the statistics, if we asked to */
+			if (wstats != NULL)
+				wstats->nlaunched += pvs->pcxt->nworkers_launched;
 		}
 
 		if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..953a506181e 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,28 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * Statistics for parallel vacuum workers (planned vs. actual)
+ */
+typedef struct PVWorkerStats
+{
+	/* Number of parallel workers planned to launch */
+	int			nplanned;
+
+	/* Number of parallel workers that were successfully launched */
+	int			nlaunched;
+} PVWorkerStats;
+
+/*
+ * PVWorkerUsage stores information about total number of launched and
+ * planned workers during parallel vacuum (both for index vacuum and cleanup).
+ */
+typedef struct PVWorkerUsage
+{
+	PVWorkerStats vacuum;
+	PVWorkerStats cleanup;
+} PVWorkerUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +416,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkerStats *wstats);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkerStats *wstats);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 4673eca9cd6..a4a2ed07816 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2090,6 +2090,8 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVWorkerUsage
+PVWorkerStats
 PX_Alias
 PX_Cipher
 PX_Combo
-- 
2.43.0

From 8becbcbb62c89537955de3edb5e7a568c244b631 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:50:23 +0700
Subject: [PATCH v30 4/5] Tests for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c          |   9 +
 src/backend/commands/vacuumparallel.c         |  18 ++
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/test_autovacuum/.gitignore   |   2 +
 src/test/modules/test_autovacuum/Makefile     |  20 ++
 src/test/modules/test_autovacuum/meson.build  |  15 ++
 .../t/001_parallel_autovacuum.pl              | 191 ++++++++++++++++++
 8 files changed, 257 insertions(+)
 create mode 100644 src/test/modules/test_autovacuum/.gitignore
 create mode 100644 src/test/modules/test_autovacuum/Makefile
 create mode 100644 src/test/modules/test_autovacuum/meson.build
 create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index c57432670e7..8d2980f3ef0 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -152,6 +152,7 @@
 #include "storage/latch.h"
 #include "storage/lmgr.h"
 #include "storage/read_stream.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
@@ -873,6 +874,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * Trigger injection point, if parallel autovacuum is about to be started.
+	 */
+	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
 	 * vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 98aeb66eec4..62b6f50b538 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -46,6 +46,7 @@
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 
@@ -653,6 +654,14 @@ parallel_vacuum_update_shared_delay_params(void)
 	VacuumUpdateCosts();
 
 	shared_params_generation_local = params_generation;
+
+	elog(DEBUG2,
+		 "parallel autovacuum worker updated cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
 }
 
 /*
@@ -895,6 +904,15 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * This injection point is used to wait until parallel autovacuum workers
+	 * finishes their part of index processing.
+	 */
+	if (nworkers > 0)
+		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
 
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 28ce3b35eda..336a212faf4 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 3ac291656c1..929659956cb 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..188ec9f96a2
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..86e392bc0de
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+       'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
+      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..0364019d5f0
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,191 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test we should disable autovacuum for 'test_autovac' table and
+# generate some dead tuples in it. Returns the current autovacuum_count of
+# the table tset_autovac.
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql(
+		'postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+
+	my $count = $node->safe_psql(
+		'postgres', qq{
+		SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+
+	return $count;
+}
+
+# Wait for the table to be vacuumed by an autovacuum worker.
+sub wait_for_autovacuum_complete
+{
+	my ($node, $old_count) = @_;
+
+	$node->poll_query_until(
+		'postgres', qq{
+		SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+}
+
+my $psql_out;
+
+my $node = PostgreSQL::Test::Cluster->new('main');
+$node->init;
+
+# Configure postgres, so it can launch parallel autovacuum workers, log all
+# information we are interested in and autovacuum works frequently
+$node->append_conf(
+	'postgresql.conf', qq{
+	max_worker_processes = 20
+	max_parallel_workers = 20
+	autovacuum_max_parallel_workers = 4
+	log_min_messages = debug2
+	autovacuum_naptime = '1s'
+	min_parallel_index_scan_size = 0
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql(
+	'postgres', qq{
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 3;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql(
+	'postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql(
+	'postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1 :
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode. Just check if it can do it.
+
+my $av_count = prepare_for_next_test($node, 1);
+my $log_offset = -s $node->logfile;
+
+$node->safe_psql(
+	'postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on table is completed. At the same time,
+# we check that the required number of parallel workers has been started.
+wait_for_autovacuum_complete($node, $av_count);
+ok( $node->log_contains(
+		qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+		$log_offset));
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to the parallel workers.
+
+$av_count = prepare_for_next_test($node, 2);
+$log_offset = -s $node->logfile;
+
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is inited
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-start-parallel-vacuum');
+
+# Update the shared cost-based delay parameters.
+$node->safe_psql(
+	'postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+# Resume the leader process to update the shared parameters during heap scan (i.e.
+# vacuum_delay_point() is called) and launch a parallel vacuum worker, but it stops
+# before vacuuming indexes due to the injection point.
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-leader-before-indexes-processing');
+
+# Check whether parallel worker successfully updated all parameters during
+# index processing
+$node->wait_for_log(
+	qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_offset);
+
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+wait_for_autovacuum_complete($node, $av_count);
+
+# Cleanup
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
-- 
2.43.0

From 2490e0f492096d33f56eb8d0a2a3da35434dfa1a Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 19 Mar 2026 21:25:40 +0700
Subject: [PATCH] fixes for 0004

---
 .../t/001_parallel_autovacuum.pl              | 61 +++++++++++--------
 1 file changed, 36 insertions(+), 25 deletions(-)

diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
index 2f34999d25e..0364019d5f0 100644
--- a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -17,12 +17,14 @@ sub prepare_for_next_test
 {
 	my ($node, $test_number) = @_;
 
-	$node->safe_psql('postgres', qq{
+	$node->safe_psql(
+		'postgres', qq{
 		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
 		UPDATE test_autovac SET col_1 = $test_number;
 	});
 
-	my $count = $node->safe_psql('postgres', qq{
+	my $count = $node->safe_psql(
+		'postgres', qq{
 		SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
 	});
 
@@ -34,7 +36,8 @@ sub wait_for_autovacuum_complete
 {
 	my ($node, $old_count) = @_;
 
-	$node->poll_query_until('postgres', qq{
+	$node->poll_query_until(
+		'postgres', qq{
 		SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
 	});
 }
@@ -46,7 +49,8 @@ $node->init;
 
 # Configure postgres, so it can launch parallel autovacuum workers, log all
 # information we are interested in and autovacuum works frequently
-$node->append_conf('postgresql.conf', qq{
+$node->append_conf(
+	'postgresql.conf', qq{
 	max_worker_processes = 20
 	max_parallel_workers = 20
 	autovacuum_max_parallel_workers = 4
@@ -65,7 +69,8 @@ if (!$node->check_extension('injection_points'))
 }
 
 # Create all functions needed for testing
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	CREATE EXTENSION injection_points;
 });
 
@@ -74,7 +79,8 @@ my $initial_rows_num = 10_000;
 my $autovacuum_parallel_workers = 2;
 
 # Create table and fill it with some data
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	CREATE TABLE test_autovac (
 		id SERIAL PRIMARY KEY,
 		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
@@ -91,7 +97,8 @@ $node->safe_psql('postgres', qq{
 });
 
 # Create specified number of b-tree indexes on the table
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	DO \$\$
 	DECLARE
 		i INTEGER;
@@ -109,15 +116,17 @@ $node->safe_psql('postgres', qq{
 my $av_count = prepare_for_next_test($node, 1);
 my $log_offset = -s $node->logfile;
 
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
 });
 
 # Wait until the parallel autovacuum on table is completed. At the same time,
 # we check that the required number of parallel workers has been started.
 wait_for_autovacuum_complete($node, $av_count);
-ok($node->log_contains(qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
-					   $log_offset));
+ok( $node->log_contains(
+		qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+		$log_offset));
 
 # Test 2:
 # Check whether parallel autovacuum leader can propagate cost-based parameters
@@ -126,7 +135,8 @@ ok($node->log_contains(qr/parallel workers: index vacuum: 2 planned, 2 launched
 $av_count = prepare_for_next_test($node, 2);
 $log_offset = -s $node->logfile;
 
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
 	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
 
@@ -134,13 +144,12 @@ $node->safe_psql('postgres', qq{
 });
 
 # Wait until parallel autovacuum is inited
-$node->wait_for_event(
-	'autovacuum worker',
-	'autovacuum-start-parallel-vacuum'
-);
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-start-parallel-vacuum');
 
 # Update the shared cost-based delay parameters.
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	ALTER SYSTEM SET vacuum_cost_limit = 500;
 	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
 	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
@@ -151,27 +160,29 @@ $node->safe_psql('postgres', qq{
 # Resume the leader process to update the shared parameters during heap scan (i.e.
 # vacuum_delay_point() is called) and launch a parallel vacuum worker, but it stops
 # before vacuuming indexes due to the injection point.
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
 });
-$node->wait_for_event(
-	'autovacuum worker',
-	'autovacuum-leader-before-indexes-processing'
-);
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-leader-before-indexes-processing');
 
 # Check whether parallel worker successfully updated all parameters during
 # index processing
-$node->wait_for_log(qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
-					$log_offset);
+$node->wait_for_log(
+	qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_offset);
 
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
 });
 
 wait_for_autovacuum_complete($node, $av_count);
 
 # Cleanup
-$node->safe_psql('postgres', qq{
+$node->safe_psql(
+	'postgres', qq{
 	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
 	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
 });
-- 
2.43.0

From 029354cf40fea428c20de09ccacc6afe503c73f6 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 19 Mar 2026 21:19:35 +0700
Subject: [PATCH] fixes for 0002

---
 src/backend/access/common/reloptions.c    | 2 +-
 src/backend/postmaster/autovacuum.c       | 6 +++++-
 src/backend/utils/misc/guc_parameters.dat | 2 +-
 src/include/utils/rel.h                   | 7 -------
 4 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 055585c38f3..03e6fae930e 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -238,7 +238,7 @@ static relopt_int intRelOpts[] =
 	{
 		{
 			"autovacuum_parallel_workers",
-			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			"Overrides value of the autovacuum_max_parallel_workers parameter for this table, if > -1.",
 			RELOPT_KIND_HEAP,
 			ShareUpdateExclusiveLock
 		},
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index ff57d8fca2a..e810e1303db 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2866,7 +2866,11 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			if (avopts->autovacuum_parallel_workers > 0)
 				nparallel_workers = avopts->autovacuum_parallel_workers;
 			else if (avopts->autovacuum_parallel_workers == -1)
-				nparallel_workers = 0;
+			{
+				nparallel_workers = autovacuum_max_parallel_workers > 0
+					? autovacuum_max_parallel_workers
+					: -1; /* disable parallelism if parameter's value is 0 */
+			}
 		}
 
 		tab->at_params.freeze_min_age = freeze_min_age;
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index bc23ddf5201..3d2fd35a004 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -155,7 +155,7 @@
 },
 
 { name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
-  short_desc => 'Maximum number of parallel processes per autovacuuming of one table.',
+  short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
   variable => 'autovacuum_max_parallel_workers',
   boot_val => '2',
   min => '0',
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 1981954008e..cd1e92f2302 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -312,14 +312,7 @@ typedef struct AutoVacOpts
 {
 	bool		enabled;
 
-	/*
-	 * Target number of parallel autovacuum workers. 0 by default disables
-	 * parallel vacuum during autovacuum. -1 means choose the parallel degree
-	 * based on the number of indexes (the autovacuum_max_parallel_workers
-	 * parameter will be used as a limit).
-	 */
 	int			autovacuum_parallel_workers;
-
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0

Reply via email to