On Wed, Mar 25, 2026 at 12:45 AM Daniil Davydov <[email protected]> wrote:
>
> Hi,
>
> > > Yeah, currently a user can misconfigure max_parallel_workers, so (for
> > > example) multiple VACUUM PARALLEL operations running at the same time
> > > will face a shortage of parallel workers. But I guess that every system
> > > has some sane limit for this parameter's value. If we want to ensure
> > > that all a/v leaders are guaranteed to launch as many parallel workers
> > > as required, we might need to increase max_parallel_workers too much
> > > (and cross the sane limit). IMHO it may be unacceptable for many
> > > systems in production, because it will undermine their stability.
> >
> > I understand the concern that if the max_parallel_workers (and/or
> > max_worker_processes) value is not high enough to ensure that each
> > autovacuum worker can launch autovacuum_max_parallel_workers, an
> > autovacuum on a very large table might not be able to launch the full
> > number of workers in cases where some parallel workers are already being
> > used by others (e.g., another autovacuum on a different,
> > slightly-smaller table). But I'm not sure that the opt-out style can
> > handle these cases. Even if there are two huge tables and users set
> > parallel_vacuum_workers on both tables, there is no guarantee that
> > autovacuums on these tables can use the full number of workers, as long
> > as the max_parallel_workers value is not high enough.
>
> I guess you mean the "opt-in" style here?
Oops, yes. I meant to write "opt-in" style.

> Sure, even the opt-in style doesn't give us an unbreakable guarantee that
> huge tables will be processed with the desired number of parallel workers.
> But IMHO "opt-in" greatly increases the probability of this. Searching for
> arguments in favor of the opt-in style, I asked for help from another
> person who has been managing the setup of high-load systems for decades.
> He promised to share his opinion next week.

Given that we have one and a half weeks before the feature freeze, I think
it's better to complete the project first rather than wait for his comments
next week. Even if we finish this feature with the opt-out style, we can
hear more opinions on it and change the default behavior afterward, as the
change would be trivial. What do you think?

I've squashed all patches except for the documentation patch, as I assume
you're working on it. The attached fixup patch contains several changes:
using the opt-out style, comment improvements, typo fixes, etc.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
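To make the opt-out semantics discussed above concrete, here is a small standalone sketch. It is illustrative only, not PostgreSQL source: the function names and simplified signatures are invented for this example, but the decision logic mirrors what the v31 fixup does in table_recheck_autovac() and parallel_vacuum_compute_workers().

```c
/*
 * Illustrative sketch (not PostgreSQL source) of the opt-out behavior.
 * The autovacuum_parallel_workers reloption defaults to -1:
 *   -1 -> no per-table setting; parallelism is governed by the
 *         autovacuum_max_parallel_workers GUC (the opt-out default),
 *    0 -> this table explicitly opts out of parallel index vacuuming,
 *   >0 -> explicit per-table request, still capped by the GUC.
 */
static int
resolve_reloption(int reloption)
{
    if (reloption == 0)
        return -1;              /* -1 here means "parallel vacuum disabled" */
    if (reloption > 0)
        return reloption;       /* explicit per-table request */
    return 0;                   /* 0 means "choose automatically later" */
}

/*
 * Models the cap applied when computing the parallel degree: a GUC value
 * of 0 disables parallel autovacuum entirely; otherwise the request is
 * bounded by the number of parallel-safe indexes and by the GUC.
 */
static int
cap_workers(int nrequested, int nindexes_parallel, int guc_max)
{
    int nworkers;

    if (guc_max == 0)
        return 0;
    nworkers = (nrequested > 0 && nrequested < nindexes_parallel)
        ? nrequested
        : nindexes_parallel;
    return (nworkers < guc_max) ? nworkers : guc_max;
}
```

With this shape, a table that never set the reloption still gets parallel autovacuum once the administrator raises the GUC, which is the "opt-out" default the thread settles on.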
From 98e63807d9dbbf2d6153ce4b8139a49f84339a07 Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <[email protected]>
Date: Wed, 25 Mar 2026 14:49:12 -0700
Subject: [PATCH v31 2/2] fixup: several changes.

- use opt-out style.
- adjust default values.
- improve comments.
- fixes typos etc.
---
 src/backend/access/common/reloptions.c        |  2 +-
 src/backend/access/heap/vacuumlazy.c          |  5 +-
 src/backend/commands/vacuumparallel.c         | 52 +++++++++++++------
 src/backend/postmaster/autovacuum.c           | 36 +++++++------
 src/backend/utils/init/globals.c              |  2 +-
 src/backend/utils/misc/guc.c                  |  7 +--
 src/backend/utils/misc/guc_parameters.dat     |  2 +-
 src/backend/utils/misc/postgresql.conf.sample |  2 +-
 .../t/001_parallel_autovacuum.pl              | 22 ++++----
 9 files changed, 82 insertions(+), 48 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index ce41b015b32..cee705500f8 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -243,7 +243,7 @@ static relopt_int intRelOpts[] =
 			RELOPT_KIND_HEAP,
 			ShareUpdateExclusiveLock
 		},
-		0, -1, 1024
+		-1, -1, 1024
 	},
 	{
 		{
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8c7de657976..9fd4f6febbe 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -864,8 +864,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	dead_items_alloc(vacrel, params.nworkers);
 
 #ifdef USE_INJECTION_POINTS
+
 	/*
-	 * Trigger injection point, if parallel autovacuum is about to be started.
+	 * Used by tests to pause before parallel vacuum is launched, allowing
+	 * test code to modify configuration that the leader then propagates to
+	 * workers.
 	 */
 	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
 		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 62b6f50b538..13544de5b93 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -69,10 +69,12 @@ typedef struct PVSharedCostParams
 {
 	/*
 	 * The generation counter is incremented by the leader process each time
-	 * it updates the shared cost-based vacuum delay parameters. Paralell
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
 	 * vacuum workers compares it with their local generation,
 	 * shared_params_generation_local, to detect whether they need to refresh
-	 * their local parameters.
+	 * their local parameters. The generation starts from 1 so that a freshly
+	 * started worker (whose local copy is 0) will always load the initial
+	 * parameters on its first check.
 	 */
 	pg_atomic_uint32 generation;
 
@@ -158,13 +160,13 @@ typedef struct PVShared
 
 	/*
 	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
-	 * running parallel maintenence VACUUM.
+	 * running parallel maintenance VACUUM.
 	 */
 	bool		is_autovacuum;
 
 	/*
-	 * Struct for syncing cost-based vacuum delay parameters between
-	 * supportive parallel autovacuum workers with leader worker.
+	 * Cost-based vacuum delay parameters shared between the autovacuum leader
+	 * and its parallel workers.
 	 */
 	PVSharedCostParams cost_params;
 } PVShared;
@@ -271,7 +273,13 @@ struct ParallelVacuumState
 
 static PVSharedCostParams *pv_shared_cost_params = NULL;
 
-/* See comments in the PVSharedCostParams for the details */
+/*
+ * Worker-local copy of the last cost-parameter generation this worker has
+ * applied.  Initialized to 0; since the leader initializes the shared
+ * generation counter to 1, the first call to
+ * parallel_vacuum_update_shared_delay_params() will always detect a
+ * mismatch and read the initial parameters from shared memory.
+ */
 static uint32 shared_params_generation_local = 0;
 
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes,
 											int nrequested,
@@ -455,7 +463,7 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	if (shared->is_autovacuum)
 	{
 		parallel_vacuum_set_cost_parameters(&shared->cost_params);
-		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		pg_atomic_init_u32(&shared->cost_params.generation, 1);
 		SpinLockInit(&shared->cost_params.mutex);
 
 		pv_shared_cost_params = &(shared->cost_params);
@@ -623,7 +631,7 @@ parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
  * Updates the cost-based vacuum delay parameters for parallel autovacuum
  * workers.
  *
- * For non-autovacuum parallel worker this function will have no effect.
+ * For non-autovacuum parallel workers, this function will have no effect.
  */
 void
 parallel_vacuum_update_shared_delay_params(void)
@@ -632,7 +640,7 @@
-	/* Quick return if the wokrer is not running for the autovacuum */
+	/* Quick return if the worker is not running for the autovacuum */
 	if (pv_shared_cost_params == NULL)
 		return;
@@ -681,7 +689,7 @@ parallel_vacuum_propagate_shared_delay_params(void)
 		return;
 
 	/*
-	 * Check if any delay parameters has changed. We can read them without
+	 * Check if any delay parameters have changed. We can read them without
 	 * locks as only the leader can modify them.
 	 */
 	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
@@ -905,9 +913,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	}
 
 #ifdef USE_INJECTION_POINTS
+
 	/*
-	 * This injection point is used to wait until parallel autovacuum workers
-	 * finishes their part of index processing.
+	 * Used by tests to pause after workers are launched but before index
+	 * vacuuming begins.
 	 */
 	if (nworkers > 0)
 		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
@@ -1247,15 +1256,26 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 										  shared->dead_items_handle);
 
 	/* Set cost-based vacuum delay */
-	VacuumUpdateCosts();
+	if (shared->is_autovacuum)
+	{
+		/*
+		 * Parallel autovacuum workers initialize cost-based delay parameters
+		 * from the leader's shared state rather than GUC defaults, because
+		 * the leader may have applied per-table or autovacuum-specific
+		 * overrides. pv_shared_cost_params must be set before calling
+		 * parallel_vacuum_update_shared_delay_params().
+		 */
+		pv_shared_cost_params = &(shared->cost_params);
+		parallel_vacuum_update_shared_delay_params();
+	}
+	else
+		VacuumUpdateCosts();
+
 	VacuumCostBalance = 0;
 	VacuumCostBalanceLocal = 0;
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
-	if (shared->is_autovacuum)
-		pv_shared_cost_params = &(shared->cost_params);
-
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 562514e2ece..ce893db1ab5 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2797,7 +2797,6 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	int			multixact_freeze_table_age;
 	int			log_vacuum_min_duration;
 	int			log_analyze_min_duration;
-	int			nparallel_workers = -1; /* disabled by default */
 
 	/*
 	 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2858,19 +2857,6 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 	tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
 
-	/* Decide whether we need to process indexes of table in parallel. */
-	if (avopts)
-	{
-		if (avopts->autovacuum_parallel_workers > 0)
-			nparallel_workers = avopts->autovacuum_parallel_workers;
-		else if (avopts->autovacuum_parallel_workers == -1)
-		{
-			nparallel_workers = autovacuum_max_parallel_workers > 0
-				? autovacuum_max_parallel_workers
-				: -1;	/* disable parallelism if parameter's value is 0 */
-		}
-	}
-
 	tab->at_params.freeze_min_age = freeze_min_age;
 	tab->at_params.freeze_table_age = freeze_table_age;
 	tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -2879,7 +2865,27 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	tab->at_params.log_vacuum_min_duration = log_vacuum_min_duration;
 	tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 	tab->at_params.toast_parent = InvalidOid;
-	tab->at_params.nworkers = nparallel_workers;
+
+	/* Determine the number of parallel vacuum workers to use */
+	tab->at_params.nworkers = 0;
+	if (avopts)
+	{
+		if (avopts->autovacuum_parallel_workers == 0)
+		{
+			/*
+			 * Disable parallel vacuum, if the reloption sets the parallel
+			 * degree as zero.
+			 */
+			tab->at_params.nworkers = -1;
+		}
+		else if (avopts->autovacuum_parallel_workers > 0)
+			tab->at_params.nworkers = avopts->autovacuum_parallel_workers;
+
+		/*
+		 * autovacuum_parallel_workers == -1 falls through, keep
+		 * nworkers=0
+		 */
+	}
 
 	/*
 	 * Later, in vacuum_rel(), we check reloptions for any
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 8265a82b639..24ddb276f0c 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,7 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
-int			autovacuum_max_parallel_workers = 2;
+int			autovacuum_max_parallel_workers = 0;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 45b39b7c47f..1ac8e8fc3be 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3359,9 +3359,10 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
 	 * Other changes might need to affect other workers, so forbid them. Note,
-	 * that parallel autovacuum leader is an exception, because only
-	 * cost-based delays need to be affected also to parallel autovacuum
-	 * workers, and we will handle it elsewhere if appropriate.
+	 * that parallel autovacuum leader is an exception because only cost-based
+	 * delays need to be affected also to parallel autovacuum workers. These
+	 * parameters are propagated to its workers during parallel vacuum (see
+	 * vacuumparallel.c for details).
 	 */
 	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
 		action != GUC_ACTION_SAVE &&
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 3d2fd35a004..275198f2023 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -157,7 +157,7 @@
 { name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
   variable => 'autovacuum_max_parallel_workers',
-  boot_val => '2',
+  boot_val => '0',
   min => '0',
   max => 'MAX_PARALLEL_WORKER_LIMIT',
 },
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 11d96f4dd4f..9853df0bdf7 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -713,7 +713,7 @@
 #autovacuum_worker_slots = 16		# autovacuum worker slots to allocate
 					# (change requires restart)
 #autovacuum_max_workers = 3		# max number of autovacuum subprocesses
-#autovacuum_max_parallel_workers = 2	# limited by max_parallel_workers
+#autovacuum_max_parallel_workers = 0	# limited by max_parallel_workers
 #autovacuum_naptime = 1min		# time between autovacuum runs
 #autovacuum_vacuum_threshold = 50	# min number of row updates before
 					# vacuum
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
index 0364019d5f0..2aca32374a2 100644
--- a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -12,7 +12,7 @@ if ($ENV{enable_injection_points} ne 'yes')
 
 # Before each test we should disable autovacuum for 'test_autovac' table and
 # generate some dead tuples in it. Returns the current autovacuum_count of
-# the table tset_autovac.
+# the table test_autovac.
 sub prepare_for_next_test
 {
 	my ($node, $test_number) = @_;
@@ -47,16 +47,20 @@ my $psql_out;
 my $node = PostgreSQL::Test::Cluster->new('main');
 $node->init;
 
-# Configure postgres, so it can launch parallel autovacuum workers, log all
-# information we are interested in and autovacuum works frequently
+# Limit to one autovacuum worker and disable autovacuum logging globally
+# (enabled only on the test table) so that log checks below match only
+# activity on the expected table.
 $node->append_conf(
 	'postgresql.conf', qq{
-	max_worker_processes = 20
-	max_parallel_workers = 20
-	autovacuum_max_parallel_workers = 4
-	log_min_messages = debug2
-	autovacuum_naptime = '1s'
-	min_parallel_index_scan_size = 0
+autovacuum_max_workers = 1
+autovacuum_worker_slots = 1
+autovacuum_max_parallel_workers = 2
+max_worker_processes = 10
+max_parallel_workers = 10
+log_min_messages = debug2
+autovacuum_naptime = '1s'
+min_parallel_index_scan_size = 0
+log_autovacuum_min_duration = -1
 });
 $node->start;
-- 
2.53.0
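The generation-counter scheme that the fixup documents (shared counter starting at 1, worker-local copy starting at 0) can be modeled in a few dozen lines. The sketch below is not PostgreSQL source: C11 atomics and a pthread mutex stand in for pg_atomic_uint32 and the slock_t spinlock, and only two of the five cost parameters are modeled.

```c
#include <pthread.h>
#include <stdatomic.h>

/* Standalone model of PVSharedCostParams (illustrative only). */
typedef struct SharedCostParams
{
    atomic_uint generation;     /* bumped by the leader after each update */
    pthread_mutex_t mutex;      /* protects the fields below */
    double cost_delay;
    int cost_limit;
} SharedCostParams;

/*
 * Shared state; generation starts at 1, as in the fixup, so that a fresh
 * worker (whose local generation is 0) always loads the initial values.
 */
static SharedCostParams shared = {
    .generation = 1,
    .mutex = PTHREAD_MUTEX_INITIALIZER,
    .cost_delay = 2.0,
    .cost_limit = 200,
};

/* Worker-local state, analogous to shared_params_generation_local. */
static unsigned local_generation = 0;
static double local_cost_delay;
static int local_cost_limit;

/* Leader side: publish new values under the mutex, then bump generation. */
static void
leader_propagate(double delay, int limit)
{
    pthread_mutex_lock(&shared.mutex);
    shared.cost_delay = delay;
    shared.cost_limit = limit;
    pthread_mutex_unlock(&shared.mutex);
    atomic_fetch_add(&shared.generation, 1);
}

/*
 * Worker side: a cheap atomic read on every call; the mutex is taken and the
 * parameters copied only when the generation moved.  Returns 1 if a refresh
 * happened, 0 otherwise.
 */
static int
worker_maybe_refresh(void)
{
    unsigned gen = atomic_load(&shared.generation);

    if (gen == local_generation)
        return 0;
    pthread_mutex_lock(&shared.mutex);
    local_cost_delay = shared.cost_delay;
    local_cost_limit = shared.cost_limit;
    pthread_mutex_unlock(&shared.mutex);
    local_generation = gen;
    return 1;
}
```

This mirrors why the fixup changed the initial generation from 0 to 1: with both counters starting at 0, a worker's first check would wrongly conclude that nothing had changed and skip loading the leader's initial parameters.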
From 493070daf550b5b7931d21d4b5661ec3b466f51b Mon Sep 17 00:00:00 2001 From: Daniil Davidov <[email protected]> Date: Tue, 17 Mar 2026 02:18:09 +0700 Subject: [PATCH v31 1/2] Parallel autovacuum --- src/backend/access/common/reloptions.c | 11 + src/backend/access/heap/vacuumlazy.c | 9 + src/backend/commands/vacuum.c | 21 +- src/backend/commands/vacuumparallel.c | 201 +++++++++++++++++- src/backend/postmaster/autovacuum.c | 20 +- src/backend/utils/init/globals.c | 1 + src/backend/utils/misc/guc.c | 8 +- src/backend/utils/misc/guc_parameters.dat | 8 + src/backend/utils/misc/postgresql.conf.sample | 1 + src/bin/psql/tab-complete.in.c | 1 + src/include/commands/vacuum.h | 2 + src/include/miscadmin.h | 1 + src/include/utils/rel.h | 2 + src/test/modules/Makefile | 1 + src/test/modules/meson.build | 1 + src/test/modules/test_autovacuum/.gitignore | 2 + src/test/modules/test_autovacuum/Makefile | 20 ++ src/test/modules/test_autovacuum/meson.build | 15 ++ .../t/001_parallel_autovacuum.pl | 191 +++++++++++++++++ src/tools/pgindent/typedefs.list | 1 + 20 files changed, 504 insertions(+), 13 deletions(-) create mode 100644 src/test/modules/test_autovacuum/.gitignore create mode 100644 src/test/modules/test_autovacuum/Makefile create mode 100644 src/test/modules/test_autovacuum/meson.build create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c index a6002ae9b07..ce41b015b32 100644 --- a/src/backend/access/common/reloptions.c +++ b/src/backend/access/common/reloptions.c @@ -236,6 +236,15 @@ static relopt_int intRelOpts[] = }, SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100 }, + { + { + "autovacuum_parallel_workers", + "Overrides value of the autovacuum_max_parallel_workers parameter for this table, if > -1.", + RELOPT_KIND_HEAP, + ShareUpdateExclusiveLock + }, + 0, -1, 1024 + }, { { "autovacuum_vacuum_threshold", @@ -1969,6 +1978,8 @@ 
default_reloptions(Datum reloptions, bool validate, relopt_kind kind) {"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)}, {"autovacuum_enabled", RELOPT_TYPE_BOOL, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)}, + {"autovacuum_parallel_workers", RELOPT_TYPE_INT, + offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)}, {"autovacuum_vacuum_threshold", RELOPT_TYPE_INT, offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)}, {"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT, diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c index f698c2d899b..8c7de657976 100644 --- a/src/backend/access/heap/vacuumlazy.c +++ b/src/backend/access/heap/vacuumlazy.c @@ -152,6 +152,7 @@ #include "storage/latch.h" #include "storage/lmgr.h" #include "storage/read_stream.h" +#include "utils/injection_point.h" #include "utils/lsyscache.h" #include "utils/pg_rusage.h" #include "utils/timestamp.h" @@ -862,6 +863,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params, lazy_check_wraparound_failsafe(vacrel); dead_items_alloc(vacrel, params.nworkers); +#ifdef USE_INJECTION_POINTS + /* + * Trigger injection point, if parallel autovacuum is about to be started. 
+ */ + if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel)) + INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL); +#endif + /* * Call lazy_scan_heap to perform all required heap pruning, index * vacuuming, and heap vacuuming (plus related processing) diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c index bce3a2daa24..1b5ba3ce1ef 100644 --- a/src/backend/commands/vacuum.c +++ b/src/backend/commands/vacuum.c @@ -2435,8 +2435,19 @@ vacuum_delay_point(bool is_analyze) /* Always check for interrupts */ CHECK_FOR_INTERRUPTS(); - if (InterruptPending || - (!VacuumCostActive && !ConfigReloadPending)) + if (InterruptPending) + return; + + if (IsParallelWorker()) + { + /* + * Update cost-based vacuum delay parameters for a parallel autovacuum + * worker if any changes are detected. + */ + parallel_vacuum_update_shared_delay_params(); + } + + if (!VacuumCostActive && !ConfigReloadPending) return; /* @@ -2450,6 +2461,12 @@ vacuum_delay_point(bool is_analyze) ConfigReloadPending = false; ProcessConfigFile(PGC_SIGHUP); VacuumUpdateCosts(); + + /* + * Propagate cost-based vacuum delay parameters to shared memory if + * any of them have changed during the config reload. + */ + parallel_vacuum_propagate_shared_delay_params(); } /* diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c index 77834b96a21..62b6f50b538 100644 --- a/src/backend/commands/vacuumparallel.c +++ b/src/backend/commands/vacuumparallel.c @@ -1,7 +1,9 @@ /*------------------------------------------------------------------------- * * vacuumparallel.c - * Support routines for parallel vacuum execution. + * Support routines for parallel vacuum and autovacuum execution. In the + * comments below, the word "vacuum" will refer to both vacuum and + * autovacuum. * * This file contains routines that are intended to support setting up, using, * and tearing down a ParallelVacuumState. 
@@ -16,6 +18,13 @@ * the parallel context is re-initialized so that the same DSM can be used for * multiple passes of index bulk-deletion and index cleanup. * + * For parallel autovacuum, we need to propagate cost-based vacuum delay + * parameters from the leader to its workers, as the leader's parameters can + * change even while processing a table (e.g., due to a config reload). + * The PVSharedCostParams struct manages these parameters using a + * generation counter. Each parallel worker polls this shared state and + * refreshes its local delay parameters whenever a change is detected. + * * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group * Portions Copyright (c) 1994, Regents of the University of California * @@ -37,6 +46,7 @@ #include "storage/bufmgr.h" #include "storage/proc.h" #include "tcop/tcopprot.h" +#include "utils/injection_point.h" #include "utils/lsyscache.h" #include "utils/rel.h" @@ -51,6 +61,31 @@ #define PARALLEL_VACUUM_KEY_WAL_USAGE 4 #define PARALLEL_VACUUM_KEY_INDEX_STATS 5 +/* + * Struct for cost-based vacuum delay related parameters to share among an + * autovacuum worker and its parallel vacuum workers. + */ +typedef struct PVSharedCostParams +{ + /* + * The generation counter is incremented by the leader process each time + * it updates the shared cost-based vacuum delay parameters. Paralell + * vacuum workers compares it with their local generation, + * shared_params_generation_local, to detect whether they need to refresh + * their local parameters. + */ + pg_atomic_uint32 generation; + + slock_t mutex; /* protects all fields below */ + + /* Parameters to share with parallel workers */ + double cost_delay; + int cost_limit; + int cost_page_dirty; + int cost_page_hit; + int cost_page_miss; +} PVSharedCostParams; + /* * Shared information among parallel workers. So this is allocated in the DSM * segment. 
@@ -120,6 +155,18 @@ typedef struct PVShared /* Statistics of shared dead items */ VacDeadItemsInfo dead_items_info; + + /* + * If 'true' then we are running parallel autovacuum. Otherwise, we are + * running parallel maintenence VACUUM. + */ + bool is_autovacuum; + + /* + * Struct for syncing cost-based vacuum delay parameters between + * supportive parallel autovacuum workers with leader worker. + */ + PVSharedCostParams cost_params; } PVShared; /* Status used during parallel index vacuum or cleanup */ @@ -222,6 +269,11 @@ struct ParallelVacuumState PVIndVacStatus status; }; +static PVSharedCostParams *pv_shared_cost_params = NULL; + +/* See comments in the PVSharedCostParams for the details */ +static uint32 shared_params_generation_local = 0; + static int parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested, bool *will_parallel_vacuum); static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans, @@ -233,6 +285,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans, bool vacuum); static void parallel_vacuum_error_callback(void *arg); +static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params); /* * Try to enter parallel mode and create a parallel context. Then initialize @@ -374,8 +427,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes, shared->queryid = pgstat_get_my_query_id(); shared->maintenance_work_mem_worker = (nindexes_mwm > 0) ? 
- maintenance_work_mem / Min(parallel_workers, nindexes_mwm) : - maintenance_work_mem; + vac_work_mem / Min(parallel_workers, nindexes_mwm) : + vac_work_mem; + shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024; /* Prepare DSA space for dead items */ @@ -392,6 +446,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes, pg_atomic_init_u32(&(shared->active_nworkers), 0); pg_atomic_init_u32(&(shared->idx), 0); + shared->is_autovacuum = AmAutoVacuumWorkerProcess(); + + /* + * Initialize shared cost-based vacuum delay parameters if it's for + * autovacuum. + */ + if (shared->is_autovacuum) + { + parallel_vacuum_set_cost_parameters(&shared->cost_params); + pg_atomic_init_u32(&shared->cost_params.generation, 0); + SpinLockInit(&shared->cost_params.mutex); + + pv_shared_cost_params = &(shared->cost_params); + } + shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared); pvs->shared = shared; @@ -457,6 +526,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats) DestroyParallelContext(pvs->pcxt); ExitParallelMode(); + if (AmAutoVacuumWorkerProcess()) + pv_shared_cost_params = NULL; + pfree(pvs->will_parallel_vacuum); pfree(pvs); } @@ -534,6 +606,103 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wstats); } +/* + * Fill in the given structure with cost-based vacuum delay parameter values. + */ +static inline void +parallel_vacuum_set_cost_parameters(PVSharedCostParams *params) +{ + params->cost_delay = vacuum_cost_delay; + params->cost_limit = vacuum_cost_limit; + params->cost_page_dirty = VacuumCostPageDirty; + params->cost_page_hit = VacuumCostPageHit; + params->cost_page_miss = VacuumCostPageMiss; +} + +/* + * Updates the cost-based vacuum delay parameters for parallel autovacuum + * workers. + * + * For non-autovacuum parallel worker this function will have no effect. 
+ */ +void +parallel_vacuum_update_shared_delay_params(void) +{ + uint32 params_generation; + + Assert(IsParallelWorker()); + + /* Quick return if the wokrer is not running for the autovacuum */ + if (pv_shared_cost_params == NULL) + return; + + params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation); + Assert(shared_params_generation_local <= params_generation); + + /* Return if parameters had not changed in the leader */ + if (params_generation == shared_params_generation_local) + return; + + SpinLockAcquire(&pv_shared_cost_params->mutex); + VacuumCostDelay = pv_shared_cost_params->cost_delay; + VacuumCostLimit = pv_shared_cost_params->cost_limit; + VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty; + VacuumCostPageHit = pv_shared_cost_params->cost_page_hit; + VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss; + SpinLockRelease(&pv_shared_cost_params->mutex); + + VacuumUpdateCosts(); + + shared_params_generation_local = params_generation; + + elog(DEBUG2, + "parallel autovacuum worker updated cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d", + vacuum_cost_limit, + vacuum_cost_delay, + VacuumCostPageMiss, + VacuumCostPageDirty, + VacuumCostPageHit); +} + +/* + * Store the cost-based vacuum delay parameters in the shared memory so that + * parallel vacuum workers can consume them (see + * parallel_vacuum_update_shared_delay_params()). + */ +void +parallel_vacuum_propagate_shared_delay_params(void) +{ + Assert(AmAutoVacuumWorkerProcess()); + + /* + * Quick return if the leader process is not sharing the delay parameters. + */ + if (pv_shared_cost_params == NULL) + return; + + /* + * Check if any delay parameters has changed. We can read them without + * locks as only the leader can modify them. 
+ */ + if (vacuum_cost_delay == pv_shared_cost_params->cost_delay && + vacuum_cost_limit == pv_shared_cost_params->cost_limit && + VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty && + VacuumCostPageHit == pv_shared_cost_params->cost_page_hit && + VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss) + return; + + /* Update the shared delay parameters */ + SpinLockAcquire(&pv_shared_cost_params->mutex); + parallel_vacuum_set_cost_parameters(pv_shared_cost_params); + SpinLockRelease(&pv_shared_cost_params->mutex); + + /* + * Increment the generation of the parameters, i.e. let parallel workers + * know that they should re-read shared cost params. + */ + pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1); +} + /* * Compute the number of parallel worker processes to request. Both index * vacuum and index cleanup can be executed with parallel workers. @@ -555,12 +724,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested, int nindexes_parallel_bulkdel = 0; int nindexes_parallel_cleanup = 0; int parallel_workers; + int max_workers; + + max_workers = AmAutoVacuumWorkerProcess() ? + autovacuum_max_parallel_workers : + max_parallel_maintenance_workers; /* * We don't allow performing parallel operation in standalone backend or * when parallelism is disabled. */ - if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0) + if (!IsUnderPostmaster || max_workers == 0) return 0; /* @@ -599,8 +773,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested, parallel_workers = (nrequested > 0) ? 
Min(nrequested, nindexes_parallel) : nindexes_parallel; - /* Cap by max_parallel_maintenance_workers */ - parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers); + /* Cap by GUC variable */ + parallel_workers = Min(parallel_workers, max_workers); return parallel_workers; } @@ -730,6 +904,15 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan pvs->pcxt->nworkers_launched, nworkers))); } +#ifdef USE_INJECTION_POINTS + /* + * This injection point is used to wait until parallel autovacuum workers + * finishes their part of index processing. + */ + if (nworkers > 0) + INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL); +#endif + /* Vacuum the indexes that can be processed by only leader process */ parallel_vacuum_process_unsafe_indexes(pvs); @@ -1070,6 +1253,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc) VacuumSharedCostBalance = &(shared->cost_balance); VacuumActiveNWorkers = &(shared->active_nworkers); + if (shared->is_autovacuum) + pv_shared_cost_params = &(shared->cost_params); + /* Set parallel vacuum state */ pvs.indrels = indrels; pvs.nindexes = nindexes; @@ -1119,6 +1305,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc) vac_close_indexes(nindexes, indrels, RowExclusiveLock); table_close(rel, ShareUpdateExclusiveLock); FreeAccessStrategy(pvs.bstrategy); + + if (shared->is_autovacuum) + pv_shared_cost_params = NULL; } /* diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c index 7ecb069c248..562514e2ece 100644 --- a/src/backend/postmaster/autovacuum.c +++ b/src/backend/postmaster/autovacuum.c @@ -1658,7 +1658,7 @@ VacuumUpdateCosts(void) } else { - /* Must be explicit VACUUM or ANALYZE */ + /* Must be explicit VACUUM or ANALYZE or parallel autovacuum worker */ vacuum_cost_delay = VacuumCostDelay; vacuum_cost_limit = VacuumCostLimit; } @@ -2797,6 +2797,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map, int multixact_freeze_table_age; 
 	int			log_vacuum_min_duration;
 	int			log_analyze_min_duration;
+	int			nparallel_workers = -1; /* disabled by default */
 
 	/*
 	 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2856,8 +2857,20 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process the table's indexes in parallel. */
+		if (avopts)
+		{
+			if (avopts->autovacuum_parallel_workers > 0)
+				nparallel_workers = avopts->autovacuum_parallel_workers;
+			else if (avopts->autovacuum_parallel_workers == -1)
+			{
+				nparallel_workers = autovacuum_max_parallel_workers > 0
+					? autovacuum_max_parallel_workers
+					: -1;	/* disable parallelism if the parameter's value is 0 */
+			}
+		}
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -2866,6 +2879,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		tab->at_params.log_vacuum_min_duration = log_vacuum_min_duration;
 		tab->at_params.log_analyze_min_duration = log_analyze_min_duration;
 		tab->at_params.toast_parent = InvalidOid;
+		tab->at_params.nworkers = nparallel_workers;
 
 		/*
 		 * Later, in vacuum_rel(), we check reloptions for any
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 2;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index e1546d9c97a..45b39b7c47f 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3358,9 +3358,13 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
-	 * Other changes might need to affect other workers, so forbid them.
+	 * Other changes might need to affect other workers, so forbid them.  Note
+	 * that the parallel autovacuum leader is an exception: only cost-based
+	 * delay parameters need to be propagated to its parallel workers, and we
+	 * handle that elsewhere where appropriate.
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 0c9854ad8fc..3d2fd35a004 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
   max => '2000000000',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+  short_desc => 'Maximum number of parallel workers that can be used by a single autovacuum worker.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '2',
+  min => '0',
+  max => 'MAX_PARALLEL_WORKER_LIMIT',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index e4abe6c0077..11d96f4dd4f 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -713,6 +713,7 @@
 #autovacuum_worker_slots = 16		# autovacuum worker slots to allocate
 					# (change requires restart)
 #autovacuum_max_workers = 3		# max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2	# limited by max_parallel_workers
 #autovacuum_naptime = 1min		# time between autovacuum runs
 #autovacuum_vacuum_threshold = 50	# min number of row updates before
 					# vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 523d3f39fc5..f6bf072bab5 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1432,6 +1432,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 1f45bca015c..8b42808e70b 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -422,6 +422,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkerStats *wstats);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..cd1e92f2302 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,8 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	int			autovacuum_parallel_workers;
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 28ce3b35eda..336a212faf4 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 3ac291656c1..929659956cb 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..188ec9f96a2
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..86e392bc0de
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+      'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
+      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..0364019d5f0
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,191 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test we disable autovacuum for the 'test_autovac' table and
+# generate some dead tuples in it.  Returns the current autovacuum_count of
+# the test_autovac table.
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql(
+		'postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+
+	my $count = $node->safe_psql(
+		'postgres', qq{
+		SELECT autovacuum_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+
+	return $count;
+}
+
+# Wait for the table to be vacuumed by an autovacuum worker.
+sub wait_for_autovacuum_complete
+{
+	my ($node, $old_count) = @_;
+
+	$node->poll_query_until(
+		'postgres', qq{
+		SELECT autovacuum_count > $old_count FROM pg_stat_user_tables WHERE relname = 'test_autovac'
+	});
+}
+
+my $psql_out;
+
+my $node = PostgreSQL::Test::Cluster->new('main');
+$node->init;
+
+# Configure the server so that it can launch parallel autovacuum workers,
+# logs everything we are interested in, and runs autovacuum frequently.
+$node->append_conf(
+	'postgresql.conf', qq{
+	max_worker_processes = 20
+	max_parallel_workers = 20
+	autovacuum_max_parallel_workers = 4
+	log_min_messages = debug2
+	autovacuum_naptime = '1s'
+	min_parallel_index_scan_size = 0
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create the extension needed for testing
+$node->safe_psql(
+	'postgres', qq{
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 3;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create the table and fill it with some data
+$node->safe_psql(
+	'postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER, col_2 INTEGER, col_3 INTEGER, col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create the specified number of b-tree indexes on the table
+$node->safe_psql(
+	'postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1:
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode.  Just check that it can.
+
+my $av_count = prepare_for_next_test($node, 1);
+my $log_offset = -s $node->logfile;
+
+$node->safe_psql(
+	'postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table has completed.  At the same
+# time, check that the required number of parallel workers has been launched.
+wait_for_autovacuum_complete($node, $av_count);
+ok( $node->log_contains(
+		qr/parallel workers: index vacuum: 2 planned, 2 launched in total/,
+		$log_offset));
+
+# Test 2:
+# Check whether the parallel autovacuum leader can propagate cost-based
+# parameters to the parallel workers.
+
+$av_count = prepare_for_next_test($node, 2);
+$log_offset = -s $node->logfile;
+
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum is initialized
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-start-parallel-vacuum');
+
+# Update the shared cost-based delay parameters.
+$node->safe_psql(
+	'postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+# Resume the leader process to update the shared parameters during the heap
+# scan (i.e. when vacuum_delay_point() is called) and launch a parallel vacuum
+# worker, which then stops before vacuuming indexes due to the injection point.
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+$node->wait_for_event('autovacuum worker',
+	'autovacuum-leader-before-indexes-processing');
+
+# Check whether the parallel worker successfully updated all parameters during
+# index processing
+$node->wait_for_log(
+	qr/parallel autovacuum worker updated cost params: cost_limit=500, cost_delay=2, cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_offset);
+
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+wait_for_autovacuum_complete($node, $av_count);
+
+# Cleanup
+$node->safe_psql(
+	'postgres', qq{
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index dbbec84b222..e3b1cba5289 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2094,6 +2094,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkerUsage
 PVWorkerStats
 PX_Alias
-- 
2.53.0
