vacuumdb --jobs deadlock: how to avoid pg_catalog ?

2020-05-14 Thread Eduard Rozenberg
Hello there,

I'm a long-time postgres user but vacuumdb newbie :).

Using vacuumdb (v12.2) with '--jobs' to vacuum a v9.6 database on localhost 
with parallel processes:

$ time vacuumdb --full --echo -U postgres --jobs=8 -d mydatabase_test 
-p 5434 --password

As shown below I ran into the (100%?) predictable/expected deadlock with sys 
catalog tables documented on the vacuumdb doc page 
(https://www.postgresql.org/docs/12/app-vacuumdb.html).

Question: I specified "-d mydatabase_test" - why is it also doing a full vac on 
the system catalog (postgres db) which is practically guaranteed to cause a 
deadlock? I don't need or want it to do a full vac on the postgres db as I can 
do that myself later if I need to, and it takes no time compared to full vac of 
my own 7 TB database that I am "extremely eager" to parallelize.

Is my only option to list all schemas.tables and feed them to vacuumdb as "-t 
schema1.table1 -t schema1.table2 ..."? That's manageable but unfortunate 
as it creates more work (having to list all tables) and adds additional work 
for automation scripts as well.
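
If it does come to that, I suppose the list could at least be generated rather 
than typed by hand. An untested sketch - it assumes plain lower-case table 
names and that every non-catalog table should be included:

$ psql -At -U postgres -p 5434 -d mydatabase_test \
    -c "SELECT format('-t %I.%I', schemaname, tablename)
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')" \
  | xargs vacuumdb --full --echo -U postgres -p 5434 --jobs=8 -d mydatabase_test

Since vacuumdb only touches the tables named with -t, pg_catalog never enters 
the parallel queue.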

I've tried searching forums/goog for past issues related to this. I did find an 
old discussion about possible enhancements to "vacuumdb --jobs" to help the 
user avoid these predictable deadlocks, where the conclusion was to let the 
user figure out a way to deal with it such as via "-t table -t table -t table 
...".

Maybe "--jobs" should only apply to the user's own databases, and when vacuumdb 
is working on system catalog tables it should apply a strictly 
serial/sequential approach to handle those tables?
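
In the meantime the only deadlock-free pattern I can think of is two passes - 
user tables in parallel, catalog tables strictly serial via --jobs=1. A rough 
sketch (table lists abbreviated, so not runnable as-is):

$ vacuumdb --full --jobs=8 -U postgres -p 5434 -d mydatabase_test \
    -t schema1.table1 -t schema1.table2 ...
$ vacuumdb --full --jobs=1 -U postgres -p 5434 -d mydatabase_test \
    -t pg_catalog.pg_class -t pg_catalog.pg_type -t pg_catalog.pg_attribute ...

The catalog pass is tiny compared to the 7 TB of user tables, so running it 
serially costs almost nothing.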

Thanks!


--
output from vacuumdb's --echo
--
...
VACUUM (FULL) mycompanytesting.sometable1;
VACUUM (FULL) index.mydatabasestd_missing;
VACUUM (FULL) mycompanytesting.sometable2;
VACUUM (FULL) mycompanytesting.sometable3;
VACUUM (FULL) pg_catalog.pg_attribute;
VACUUM (FULL) mycompanydata.sometable4;
VACUUM (FULL) pg_catalog.pg_statistic;
VACUUM (FULL) mycompanytesting.sometable5;
VACUUM (FULL) pg_catalog.pg_proc;
VACUUM (FULL) pg_catalog.pg_depend;
VACUUM (FULL) pg_catalog.pg_class;
VACUUM (FULL) pg_catalog.pg_type;
vacuumdb: error: vacuuming of database "mydatabase_test" failed: ERROR:  
deadlock detected
DETAIL:  Process 28183 waits for AccessShareLock on relation 1259 of database 
35239378; blocked by process 28182.
Process 28182 waits for AccessShareLock on relation 1247 of database 35239378; 
blocked by process 28183.
HINT:  See server log for query details.


--
/var/log/postgresql-9.6
--
...
LOG:  received SIGHUP, reloading configuration files
ERROR:  canceling autovacuum task
CONTEXT:  automatic vacuum of table "mydatabase_test.someschema.sometable"
ERROR:  canceling autovacuum task
CONTEXT:  automatic vacuum of table "mydatabase_test.someschema.sometable"
ERROR:  canceling autovacuum task
CONTEXT:  automatic vacuum of table "mydatabase_test.pg_toast.pg_toast_35240266"
LOG:  could not receive data from client: Connection timed out
LOG:  received SIGHUP, reloading configuration files
LOG:  received SIGHUP, reloading configuration files
ERROR:  canceling autovacuum task
CONTEXT:  automatic vacuum of table "mydatabase_test.someschema.sometable"
LOG:  received SIGHUP, reloading configuration files
LOG:  received SIGHUP, reloading configuration files
ERROR:  canceling autovacuum task
CONTEXT:  automatic vacuum of table "mydatabase_test.pg_toast.pg_toast_35240330"
ERROR:  canceling autovacuum task
CONTEXT:  automatic vacuum of table "mydatabase_test.someschema.sometable"
ERROR:  deadlock detected
DETAIL:  Process 28183 waits for AccessShareLock on relation 1259 of database 
35239378; blocked by process 28182.
Process 28182 waits for AccessShareLock on relation 1247 of database 
35239378; blocked by process 28183.
Process 28183: VACUUM (FULL) pg_catalog.pg_type;
Process 28182: VACUUM (FULL) pg_catalog.pg_class;
HINT:  See server log for query details.
STATEMENT:  VACUUM (FULL) pg_catalog.pg_type;
ERROR:  canceling statement due to user request
STATEMENT:  VACUUM (FULL) pg_catalog.pg_class;
ERROR:  canceling statement due to user request
STATEMENT:  VACUUM (FULL) pg_catalog.pg_depend;
ERROR:  canceling statement due to user request
STATEMENT:  VACUUM (FULL) someschema.sometable
LOG:  could not send data to client: Broken pipe
FATAL:  connection to client lost
LOG:  could not send data to client: Broken pipe
FATAL:  connection to client lost
ERROR:  canceling statement due to user request
STATEMENT:  VACUUM (FULL) mycompanydata.wildcard_replacement_bkp;
LOG:  could not send data to client: Broken pipe
STATEMENT:  VACUUM (FULL) mycompanydata.wildcard_replacement_bkp;
FATAL:  connection to client lost
ERROR:  canceling statement due to user request
STATEMENT:  VACUUM (FULL) pg_catalog.pg_proc;
LOG:  c

Re: vacuumdb --jobs deadlock: how to avoid pg_catalog ?

2020-05-14 Thread Eduard Rozenberg
@Adrian thanks again.

I read the postgres docs the same way - previously used space is marked as 
available, so in theory there should be no need for VACUUM FULL: the freed 
space gets re-used by new rows.

And yet this same DB is growing at 30-50 GB weekly, despite the fact that 2.4 
TB of previously used space is clearly available ("clearly available" as proven 
by VACUUM FULL shrinking the DB space usage by 2.4 TB). I did verify that 
postgresql.conf has always been properly configured re: autovacuum: 
'autovacuum = on' and 'track_counts = on'.
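
In case it is useful to anyone else, a quick way to check whether dead tuples 
are piling up despite autovacuum is the standard pg_stat_user_tables view - 
something along these lines:

SELECT schemaname, relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

Tables showing a large n_dead_tup with an old (or null) last_autovacuum would 
point at autovacuum not keeping up.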

I'm not planning on running VACUUM FULL regularly, just "this one time". And I 
was trying to parallelize VACUUM FULL and minimize the downtime of the 
production DB caused by table locking. And then I found the option of using 
"vacuumdb --jobs" which sounded like the perfect solution except for "well you 
can't actually use --jobs because you'll run into a deadlock and everybody 
knows that and nobody has a (good) solution for it" :).

--Ed

> On May 14, 2020, at 11:46, Adrian Klaver wrote:
> 
> On 5/14/20 11:27 AM, Eduard Rozenberg wrote:
>> @Adrian thanks.
>> I did a test vacuum full the "normal" non-parallel way (VACUUM FULL sql) 
>> previously on a test db copy and saw the DB size (postgres 9.6) shrink from 
>> 6.8 TB to 4.4 TB of actual, real disk space usage ($ du -d 1).
>> I don't know the reason so much space was "locked up" (other than there is a 
>> lot of data "churn" from daily updates). But I definitely do need to do the 
>> vac full on the production db to get down to the smaller size - cannot 
>> afford the 2.4 TB of "wasted" space on an ongoing basis.
> 
> It may not be wasted space. A regular VACUUM marks space within a table 
> available for reuse(and not returned to OS) when it removes unneeded tuples. 
> It then fills that space up with new tuples, roughly speaking. So if the 
> vacuuming is working properly you will reach a steady state where space is 
> reused and the database on disk size grows slowly as reusable space is 
> occupied. I would monitor the database size on a regular basis. My guess is 
> that the VACUUM FULL is dropping the OS used space and then it fills up again 
> as the database does those updates.
> 
>> Based on your email it sounds like the vacuuming of those pg_catalog.* 
>> tables is done regardless, as a normal part of doing vac full on my own 
>> database.
>> Unfortunately I still don't see an ideal way to run vacuum full in parallel 
>> via vacuumdb without running into the expected and documented deadlock. Only 
>> method I'm aware of is to list each table individually with "-t table1 -t 
>> table2..." to "vacuumdb --jobs", which is not pleasant and not exceedingly 
>> beautiful.
>> Thanks.
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com





Re: vacuumdb --jobs deadlock: how to avoid pg_catalog ?

2020-05-14 Thread Eduard Rozenberg
@Adrian thanks.

I did a test vacuum full the "normal" non-parallel way (VACUUM FULL sql) 
previously on a test db copy and saw the DB size (postgres 9.6) shrink from 6.8 
TB to 4.4 TB of actual, real disk space usage ($ du -d 1).
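
For tracking this over time, the same numbers can also be read from inside 
postgres using the standard size functions - roughly:

-- overall database size
SELECT pg_size_pretty(pg_database_size('mydatabase_test'));

-- biggest tables (including indexes and toast), to see where the space lives
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;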

I don't know the reason so much space was "locked up" (other than there is a 
lot of data "churn" from daily updates). But I definitely do need to do the vac 
full on the production db to get down to the smaller size - cannot afford the 
2.4 TB of "wasted" space on an ongoing basis.

Based on your email it sounds like the vacuuming of those pg_catalog.* tables 
is done regardless, as a normal part of doing vac full on my own database.

Unfortunately I still don't see an ideal way to run vacuum full in parallel via 
vacuumdb without running into the expected and documented deadlock. Only method 
I'm aware of is to list each table individually with "-t table1 -t table2..." 
to "vacuum db --jobs" which is not pleasant and not exceedingly beautiful.

Thanks.

> On May 14, 2020, at 10:35, Adrian Klaver wrote:
> 
> On 5/14/20 8:35 AM, Eduard Rozenberg wrote:
>> Hello there,
>> I'm a long-time postgres user but vacuumdb newbie :).
>> Using vacuumdb (v12.2) with '--jobs' to vacuum a v9.6 database on localhost 
>> with parallel processes:
>> $ time vacuumdb --full --echo -U postgres --jobs=8 -d mydatabase_test -p 
>> 5434 --password
>> As shown below I ran into the (100%?) predictable/expected deadlock with sys 
>> catalog tables documented in the vacuumdb doc page 
>> (https://www.postgresql.org/docs/12/app-vacuumdb.html).
>> Question: I specified "-d mydatabase_test" - why is it also doing a full vac 
>> on the system catalog (postgres db) which is practically guaranteed 
> 
> The postgres database and the system catalogs are two separate things.
> 
>> to cause a deadlock? I don't need or want it to do a full vac on the 
>> postgres db as I can do that myself later if I need to, and it takes no time 
>> compared to full vac of my own 7 TB database that I am "extremely eager" to 
>> parallelize.
> 
> See:
> 
> https://www.postgresql.org/docs/12/sql-vacuum.html
> 
> "FULL
> 
>Selects “full” vacuum, which can reclaim more space, but takes much longer 
> and exclusively locks the table. This method also requires extra disk space, 
> since it writes a new copy of the table and doesn't release the old copy 
> until the operation is complete. Usually this should only be used when a 
> significant amount of space needs to be reclaimed from within the table.
> "
> 
> Pretty sure all that rewriting of tables bloats the system catalogs and so 
> they get a VACUUM FULL.
> 
> 
> The question here is why is it necessary to do a VACUUM FULL in the first 
> place?
> 
>> Is my only option to list all schemas.tables and feed them to vacuumdb as "-t 
>> schema1.table1 -t schema1.table2 ..."? That's manageable but 
>> unfortunate as it creates more work (having to list all tables) and adds 
>> additional work for automation scripts as well.
>> I've tried searching forums/goog for past issues related to this. I did find 
>> an old discussion about possible enhancements to "vacuumdb --jobs" to help 
>> the user avoid these predictable deadlocks, where the conclusion was to let 
>> the user figure out a way to deal with it such as via "-t table -t table -t 
>> table ...".
>> Maybe "--jobs" should only apply to the user's own databases, and when 
>> vacuumdb is working on system catalog tables it should apply a strictly 
>> serial/sequential approach to handle those tables?
>> Thanks!
>> --
>> output from vacuumdb's --echo
>> --
>> ...
>> VACUUM (FULL) mycompanytesting.sometable1;
>> VACUUM (FULL) index.mydatabasestd_missing;
>> VACUUM (FULL) mycompanytesting.sometable2;
>> VACUUM (FULL) mycompanytesting.sometable3;
>> VACUUM (FULL) pg_catalog.pg_attribute;
>> VACUUM (FULL) mycompanydata.sometable4;
>> VACUUM (FULL) pg_catalog.pg_statistic;
>> VACUUM (FULL) mycompanytesting.sometable5;
>> VACUUM (FULL) pg_catalog.pg_proc;
>> VACUUM (FULL) pg_catalog.pg_depend;
>> VACUUM (FULL) pg_catalog.pg_class;
>> VACUUM (FULL) pg_catalog.pg_type;
>> vacuumdb: error: vacuuming of database "mydatabase_test" failed: ERROR:   
>> deadlock detected
>> DETAIL:  Process 28183 waits for AccessShareLock o