Re: Autovacuum and visibility maps

2024-12-03 Thread Adrian Klaver

On 12/3/24 08:32, Tefft, Michael J wrote:
We have some batch queries that have occasionally had degraded 
runtimes: from 2 hours degrading to 16 hours, etc.


Comparing plans from good and bad runs, we saw that the good plans used 
index-only scans on table “x”, while the bad plans used index scans.


Using the pg_visibility utility, we found that all of the 83 partitions 
of table “x” were showing zero blocks where all tuples were visible. We 
ran a VACUUM on the table; the visibility maps are now clean and the 
good plans came back.


Our question is: why did autovacuum not spare us from this?

We are using default autovacuum parameters for all except 
log_autovacuum_min_duration=5000. These partitions are populated by 
processes that do a truncate + a single insert-select.


We see autovacuum failure (failed to get lock) messages, followed by a 
success message, in the log for one of these partitions (the biggest 
one) but even that partition showed zero blocks with all tuples visible.


Are we wrong to expect autovacuum to clean up the visibility map?


I have to believe it is due to this:

https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERY

"If you have a table whose entire contents are deleted on a periodic 
basis, consider doing it with TRUNCATE rather than using DELETE followed 
by VACUUM. TRUNCATE removes the entire content of the table immediately, 
without requiring a subsequent VACUUM or VACUUM FULL to reclaim the 
now-unused disk space. The disadvantage is that strict MVCC semantics 
are violated."


Combined with this:

https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-INSERT-THRESHOLD

"autovacuum_vacuum_threshold

Specifies the minimum number of updated or deleted tuples needed to 
trigger a VACUUM in any one table. ..."

I'm going to say the TRUNCATE itself does not trigger an autovacuum. I 
would suggest throwing a manual VACUUM in the table population script.
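
A minimal sketch of what such a population script could look like (the 
table and source names are made up for illustration):

```sql
BEGIN;
TRUNCATE my_part;
INSERT INTO my_part SELECT * FROM staging_source;
COMMIT;

-- VACUUM cannot run inside a transaction block, so issue it afterwards.
-- ANALYZE refreshes the planner statistics at the same time.
VACUUM (ANALYZE) my_part;
```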




postgres=# select version();
                            version
--------------------------------------------------------------------
 PostgreSQL 14.13 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0
 20210514 (Red Hat 8.5.0-22), 64-bit


Thank you,

Mike Tefft



--
Adrian Klaver
adrian.kla...@aklaver.com





Autovacuum and visibility maps

2024-12-03 Thread Tefft, Michael J
We have some batch queries that have occasionally had degraded runtimes: from 
2 hours degrading to 16 hours, etc.

Comparing plans from good and bad runs, we saw that the good plans used 
index-only scans on table "x", while the bad plans used index scans.

Using the pg_visibility utility, we found that all of the 83 partitions of 
table "x" were showing zero blocks where all tuples were visible. We ran a 
VACUUM on the table; the visibility maps are now clean and the good plans came 
back.
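
For reference, the per-partition visibility-map state can be inspected 
with the pg_visibility extension; a sketch, with a hypothetical 
partition name:

```sql
CREATE EXTENSION IF NOT EXISTS pg_visibility;

-- Summary counts of all-visible and all-frozen pages in the visibility map
SELECT * FROM pg_visibility_map_summary('x_part_1');
```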

Our question is: why did autovacuum not spare us from this?

We are using default autovacuum parameters for all except 
log_autovacuum_min_duration=5000. These partitions are populated by processes 
that do a truncate + a single insert-select.

We see autovacuum failure (failed to get lock) messages, followed by a success 
message, in the log for one of these partitions (the biggest one) but even that 
partition showed zero blocks with all tuples visible.

Are we wrong to expect autovacuum to clean up the visibility map?

postgres=# select version();
                            version
--------------------------------------------------------------------
 PostgreSQL 14.13 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514
 (Red Hat 8.5.0-22), 64-bit

Thank you,
Mike Tefft


Best Practices for Managing Schema Changes Dynamically with libpq

2024-12-03 Thread Sasmit Utkarsh
Dear PostgreSQL Community Team,

I am working on a project that uses libpq along with C language to interact
with PostgreSQL, and we face challenges with managing schema changes
dynamically in production while avoiding downtime. Specifically, we need
guidance on handling table structure changes/additions without tightly
coupling these changes to application updates.

*Current Approach:*
Schema changes are deployed first, followed by application updates to align
with the new structure.

*Challenges:*
Ensuring application stability during the transitional phase when the
schema and code are not fully in sync.
Handling table structure changes (e.g., adding new columns) dynamically
without requiring immediate code changes.

*Questions:*
Are there recommended best practices for managing such schema changes with
libpq?
How can we efficiently handle table additions/updates while keeping the
application and database in sync dynamically?

I would appreciate any guidance, patterns, or examples that can help us
implement a robust solution.

Thank you for your time and support!

Regards,
Sasmit Utkarsh
+91-7674022625


Re: Best Practices for Managing Schema Changes Dynamically with libpq

2024-12-03 Thread Adrian Klaver

On 12/3/24 09:43, Sasmit Utkarsh wrote:

Dear PostgreSQL Community Team,

I am working on a project that uses libpq along with C language to 
interact with PostgreSQL, and we face challenges with managing schema 
changes dynamically in production while avoiding downtime. Specifically, 
we need guidance on handling table structure changes/additions without 
tightly coupling these changes to application updates.


*Current Approach:*
Schema changes are deployed first, followed by application updates to 
align with the new structure.


*Challenges:*
Ensuring application stability during the transitional phase when the 
schema and code are not fully in sync.
Handling table structure changes (e.g., adding new columns) dynamically 
without requiring immediate code changes.


*Questions:*
Are there recommended best practices for managing such schema changes 
with libpq?


I use Sqitch (https://sqitch.org/). You have to squint, but it is libpq 
of a sort, as it uses psql to apply its changes.


How can we efficiently handle table additions/updates while keeping the 
application and database in sync dynamically?


There are way too many variations that enter into the above to give a 
complete, concrete answer in anything less than a short book.


My general rule for this is to create a map of the process in outline 
form. Personally I still think better on paper, so I pull out a legal 
pad and pencil and write out a work flow that goes from where I am to 
where I want to be. This starts with the high-level view, which I then 
drill down into to get the specific actions. I use a pencil because the 
drill-down process often uncovers flaws in the high-level view. This, 
by the way, was a method my 7th grade math teacher taught the class way 
back when.




I would appreciate any guidance, patterns, or examples that can help us 
implement a robust solution.


Thank you for your time and support!

Regards,
Sasmit Utkarsh
+91-7674022625


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Autovacuum and visibility maps

2024-12-03 Thread Adrian Klaver




On 12/3/24 10:11 AM, Tefft, Michael J wrote:

Thanks for the point about truncates versus deletes.

But most of these partitions have over 100k rows, all inserted at once. 
We have the default setting:


#autovacuum_vacuum_insert_threshold = 1000  # min number of row inserts

So I thought we should be triggering by inserts.


From your OP I took the following literally:

"... a single insert-select".

Take a look at the stat table below:

https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW

pg_stat_all_tables

For a given table, check what the *autovacuum* fields return.
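
Something along these lines (the table name is a placeholder):

```sql
SELECT relname, n_tup_ins, n_live_tup,
       last_vacuum, last_autovacuum, autovacuum_count
FROM pg_stat_all_tables
WHERE relname = 'x_part_1';
```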

You can use the function below to see if there are per table settings 
that are overriding the postgresql.conf settings.


https://www.postgresql.org/docs/current/functions-info.html

pg_options_to_table()

Something like:

select pg_options_to_table(reloptions) from pg_class where relname = 
'some_table';




Mike

*From:*Adrian Klaver 
*Sent:* Tuesday, December 3, 2024 11:57 AM
*To:* Tefft, Michael J ; 
pgsql-general@lists.postgresql.org

*Subject:* Re: Autovacuum and visibility maps



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Best Practices for Managing Schema Changes Dynamically with libpq

2024-12-03 Thread Ron Johnson
On Tue, Dec 3, 2024 at 12:44 PM Sasmit Utkarsh 
wrote:
[snip]

> How can we efficiently handle table additions/updates while keeping the
> application and database in sync dynamically?
>

Enumerate all relevant column names in SELECT and INSERT statements.  That
way, the application still works when you add columns or alter(*) data
types.  Dropping columns will still break your app.

*Altering from a numeric type to a text type might still kill your app,
if it can't convert the string into your app's int or float type.
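
A sketch of the idea, with made-up table and column names:

```sql
-- Explicit column lists keep existing statements valid when columns are added
SELECT id, name, created_at
FROM customer
WHERE id = $1;

INSERT INTO customer (id, name, created_at)
VALUES ($1, $2, now());

-- By contrast, SELECT * (or INSERT without a column list) breaks as soon as
-- the column count or order changes.
```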



Unable to Recover a Deleted Database Using PITR

2024-12-03 Thread Scott Taylor
Database randomdata did not fully restore.
This consistently does not work.
Am I missing a step or concept about how PITR works?
I am using postgres version: PostgreSQL 17.2 on x86_64-windows

Steps:
1) Updated postgresql.conf:
archive_mode = on
archive_command = 'copy "%p" "C:\\PostgresArchive\\Wal-Archive\\%f"'
log_statement = mod
summarize_wal = on

2) Re-started postgres server

3) Created a database "randomdata" and a table, inserted data:
*2024-12-02 13:57:24*.049 EST [31268] LOG:  statement: insert into
somedata (serialnumber, firstname, lastname)

4) Ran pg_basebackup

5) Dropped database:
   * 2024-12-02 14:01:27*.243 EST [19148] LOG:  statement: DROP DATABASE
randomdata;

6) Stopped postgres server

7) Removed contents of data folder: C:\Program Files\PostgreSQL\17\data

8) Extracted base.tar.gz (from pg_basebackup) into C:\Program
Files\PostgreSQL\17\data

9) Removed contents of pg_wal folder: C:\Program
Files\PostgreSQL\17\data\pg_wal

10) Added to postgresql.conf:
restore_command = 'copy "C:\\PostgresArchive\\Wal-Archive\\%f" "%p"'
recovery_target_time = '*2024-12-02 13:57:24*' (time from log file when the
insert occurred; see step 3)
recovery_target_action = promote

11) Created recovery.signal file in postgres data folder

12) Started postgres server

13) Attempted to connect to database using psql:
postgres=# \c randomdata;
connection to server at "localhost" (::1), port 5432 failed: FATAL:
 cannot connect to invalid database "randomdata"
HINT:  Use DROP DATABASE to drop invalid databases.
Previous connection kept

14) Noticed recovery.signal file was removed

*Log file:*
2024-12-02 13:57:23.852 EST [32352] LOG:  statement: create database
randomdata;
2024-12-02 13:57:24.043 EST [31268] LOG:  statement: create table somedata
(id serial primary key, serialnumber integer, firstname text, lastname
text);
2024-12-02 13:57:24.049 EST [31268] LOG:  statement: insert into somedata
(serialnumber, firstname, lastname)

*- Ran pg_basebackup*

2024-12-02 14:01:27.243 EST [19148] LOG:  statement: DROP DATABASE
randomdata;

*- Deleted Data folder, then restored from backup file and restarted server*

2024-12-02 14:04:21.630 EST [20156] LOG:  database system was interrupted;
last known up at 2024-12-02 14:00:22 EST
2024-12-02 14:04:21.630 EST [20156] LOG:  creating missing WAL directory
"pg_wal/archive_status"
2024-12-02 14:04:21.630 EST [20156] LOG:  creating missing WAL directory
"pg_wal/summaries"
2024-12-02 14:04:22.792 EST [20156] LOG:  starting backup recovery with
redo LSN 0/328, checkpoint LSN 0/380, on timeline ID 1
2024-12-02 14:04:22.805 EST [20156] LOG:  restored log file
"00010003" from archive
2024-12-02 14:04:22.818 EST [20156] LOG:  starting point-in-time recovery
to 2024-12-02 13:57:24-05
2024-12-02 14:04:22.825 EST [20156] LOG:  redo starts at 0/328
2024-12-02 14:04:22.826 EST [20156] LOG:  completed backup recovery with
redo LSN 0/328 and end LSN 0/3000120
2024-12-02 14:04:22.826 EST [20156] LOG:  consistent recovery state reached
at 0/3000120
2024-12-02 14:04:22.843 EST [20156] LOG:  restored log file
"00010004" from archive
2024-12-02 14:04:22.871 EST [20156] LOG:  recovery stopping before commit
of transaction 969, time 2024-12-02 14:01:27.281164-05
2024-12-02 14:04:22.871 EST [20156] LOG:  redo done at 0/40006F8 system
usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.04 s
2024-12-02 14:04:22.879 EST [20156] LOG:  selected new timeline ID: 2
2024-12-02 14:04:22.940 EST [20156] LOG:  archive recovery complete
2024-12-02 14:04:22.941 EST [30656] LOG:  checkpoint starting:
end-of-recovery immediate wait
2024-12-02 14:04:22.952 EST [30656] LOG:  checkpoint complete: wrote 4
buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.001 s,
sync=0.003 s, total=0.012 s; sync files=3, longest=0.001 s, average=0.001
s; distance=16385 kB, estimate=16385 kB; lsn=0/40006F8, redo lsn=0/40006F8

2024-12-02 14:05:38.142 EST [15876] FATAL:  cannot connect to invalid
database "randomdata"
2024-12-02 14:05:38.142 EST [15876] HINT:  Use DROP DATABASE to drop
invalid databases.


Forcing autocomplete on keypress

2024-12-03 Thread Zac Warham
We have a docker setup for pgadmin and we want to force autocomplete on 
keypress at launch (not through the preferences) using an environment variable 
or config of some sort. I tried modifying the file at 
https://github.com/pgadmin-org/pgadmin4/blob/master/web/pgadmin/tools/sqleditor/utils/query_tool_preferences.py#L792
 and replacing False with True and this works but then breaks postgres in other 
ways. Is there a proper way of doing this beyond modifying this file?



Re: Errors when restoring backup created by pg_dumpall

2024-12-03 Thread PopeRigby

On 12/2/24 17:17, Tom Lane wrote:

PopeRigby  writes:

On 12/1/24 12:15, Tom Lane wrote:

Cool.  You did actually install the new scripts into your target
installation, right?

Oh, is applying the patch and rebuilding PostgreSQL not enough?

Not unless you did "make install" in the contrib/earthdistance
directory (or something wider-scope that would invoke that).

regards, tom lane


Well, I did actually compile PostgreSQL after applying the patch.





Re: Autovacuum and visibility maps

2024-12-03 Thread Ron Johnson
On Tue, Dec 3, 2024 at 11:57 AM Adrian Klaver 
wrote:
[snip]

>
> I have to believe it is due to this:
>
>
> https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERY
>
> "If you have a table whose entire contents are deleted on a periodic
> basis, consider doing it with TRUNCATE rather than using DELETE followed
> by VACUUM. TRUNCATE removes the entire content of the table immediately,
> without requiring a subsequent VACUUM or VACUUM FULL to reclaim the
> now-unused disk space. The disadvantage is that strict MVCC semantics
> are violated."
>
> Combined with this:
>
>
> https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-INSERT-THRESHOLD
>
> "autovacuum_vacuum_threshold
>
> Specifies the minimum number of updated or deleted tuples needed to
> trigger a VACUUM in any one table. ...
>
> "
>
> I'm going to say the TRUNCATE itself does not trigger an autovacuum. I
> would suggest throwing a manual VACUUM in the table population script.
>

Shouldn't autovacuum_vacuum_insert_threshold kick off an autovacuum if
you're doing a lot of inserts?



RE: Autovacuum and visibility maps

2024-12-03 Thread Tefft, Michael J
Thanks for the point about truncates versus deletes.
But most of these partitions have over 100k rows, all inserted at once. We have 
the default setting:
#autovacuum_vacuum_insert_threshold = 1000  # min number of row inserts

So I thought we should be triggering by inserts.

Mike

From: Adrian Klaver 
Sent: Tuesday, December 3, 2024 11:57 AM
To: Tefft, Michael J ; 
pgsql-general@lists.postgresql.org
Subject: Re: Autovacuum and visibility maps





Re: Autovacuum and visibility maps

2024-12-03 Thread Ron Johnson
When in doubt, "manually" vacuum and/or analyze.

Maybe even disable autovacuum on that table before the TRUNCATE + INSERT,
do the "manual" vacuum-analyze and then re-enable autovacuum.  Bonus points
for programmatically determining which partitions you're going to insert
into, so that you only manually maintain those partitions.
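
Sketched out, assuming a partition named x_part_1 (all names are 
illustrative):

```sql
ALTER TABLE x_part_1 SET (autovacuum_enabled = off);

TRUNCATE x_part_1;
INSERT INTO x_part_1 SELECT * FROM staging_source;

-- The manual vacuum rebuilds the visibility map; ANALYZE refreshes statistics
VACUUM (ANALYZE) x_part_1;

ALTER TABLE x_part_1 RESET (autovacuum_enabled);
```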

On Tue, Dec 3, 2024 at 1:11 PM Tefft, Michael J 
wrote:

> Thanks for the point about truncates versus deletes.
>
> But most of these partitions have over 100k rows, all inserted at once. We
> have the default setting:
>
> #autovacuum_vacuum_insert_threshold = 1000  # min number of row inserts
>
>
>
> So I thought we should be triggering by inserts.
>
>
>
> Mike
>
>
>
> *From:* Adrian Klaver 
> *Sent:* Tuesday, December 3, 2024 11:57 AM
> *To:* Tefft, Michael J ;
> pgsql-general@lists.postgresql.org
> *Subject:* Re: Autovacuum and visibility maps
