RE: Autovacuum not functioning for large tables but it is working for a few other small tables.
Hi,

Please find the VACUUM (VERBOSE) output below. Can you please suggest what the reason is? How can we avoid these scenarios?

The customer tried to run VACUUM (VERBOSE) last night, but it ran continuously for 5 hours without any visible progress, so they had to abort it before it exhausted their maintenance window.

db_Server14=# VACUUM (VERBOSE) audittraillogentry;
INFO: vacuuming "mmsuper.audittraillogentry"
INFO: scanned index "audittraillogentry_pkey" to remove 11184539 row versions
DETAIL: CPU 25.24s/49.11u sec elapsed 81.33 sec
INFO: scanned index "audit_intime_index" to remove 11184539 row versions
DETAIL: CPU 23.27s/59.28u sec elapsed 88.63 sec
INFO: scanned index "audit_outtime_index" to remove 11184539 row versions
DETAIL: CPU 27.02s/55.10u sec elapsed 92.04 sec
INFO: scanned index "audit_sourceid_index" to remove 11184539 row versions
DETAIL: CPU 110.81s/72.29u sec elapsed 260.71 sec
INFO: scanned index "audit_destid_index" to remove 11184539 row versions
DETAIL: CPU 100.49s/87.03u sec elapsed 265.00 sec
INFO: "audittraillogentry": removed 11184539 row versions in 247622 pages
DETAIL: CPU 3.23s/0.89u sec elapsed 6.64 sec
INFO: scanned index "audittraillogentry_pkey" to remove 11184545 row versions
DETAIL: CPU 25.73s/45.72u sec elapsed 86.59 sec
INFO: scanned index "audit_intime_index" to remove 11184545 row versions
DETAIL: CPU 34.65s/56.52u sec elapsed 113.52 sec
INFO: scanned index "audit_outtime_index" to remove 11184545 row versions
DETAIL: CPU 35.55s/61.96u sec elapsed 113.89 sec
INFO: scanned index "audit_sourceid_index" to remove 11184545 row versions
DETAIL: CPU 120.60s/75.17u sec elapsed 286.78 sec
INFO: scanned index "audit_destid_index" to remove 11184545 row versions
DETAIL: CPU 111.87s/93.74u sec elapsed 295.05 sec
INFO: "audittraillogentry": removed 11184545 row versions in 1243407 pages
DETAIL: CPU 20.35s/6.45u sec elapsed 71.61 sec
INFO: scanned index "audittraillogentry_pkey" to remove 11184547 row versions
DETAIL: CPU 21.84s/43.36u sec elapsed 71.72 sec
INFO: scanned index "audit_intime_index" to remove 11184547 row versions
DETAIL: CPU 33.37s/57.07u sec elapsed 99.50 sec
INFO: scanned index "audit_outtime_index" to remove 11184547 row versions
DETAIL: CPU 35.08s/60.08u sec elapsed 110.08 sec
INFO: scanned index "audit_sourceid_index" to remove 11184547 row versions
DETAIL: CPU 117.72s/72.75u sec elapsed 256.31 sec
INFO: scanned index "audit_destid_index" to remove 11184547 row versions
DETAIL: CPU 103.46s/77.43u sec elapsed 247.23 sec
INFO: "audittraillogentry": removed 11184547 row versions in 268543 pages
DETAIL: CPU 4.36s/1.35u sec elapsed 9.61 sec
INFO: scanned index "audittraillogentry_pkey" to remove 11184521 row versions
DETAIL: CPU 26.64s/45.46u sec elapsed 80.51 sec
INFO: scanned index "audit_intime_index" to remove 11184521 row versions
DETAIL: CPU 35.05s/59.11u sec elapsed 111.23 sec
INFO: scanned index "audit_outtime_index" to remove 11184521 row versions
DETAIL: CPU 32.98s/56.41u sec elapsed 105.93 sec
INFO: scanned index "audit_sourceid_index" to remove 11184521 row versions
DETAIL: CPU 117.13s/71.14u sec elapsed 254.33 sec
INFO: scanned index "audit_destid_index" to remove 11184521 row versions
DETAIL: CPU 99.93s/81.77u sec elapsed 241.83 sec
INFO: "audittraillogentry": removed 11184521 row versions in 268593 pages
DETAIL: CPU 3.49s/1.14u sec elapsed 6.87 sec
INFO: scanned index "audittraillogentry_pkey" to remove 11184534 row versions
DETAIL: CPU 22.73s/42.41u sec elapsed 69.12 sec
INFO: scanned index "audit_intime_index" to remove 11184534 row versions
DETAIL: CPU 36.78s/68.04u sec elapsed 121.60 sec
INFO: scanned index "audit_outtime_index" to remove 11184534 row versions
DETAIL: CPU 31.11s/52.88u sec elapsed 93.93 sec
INFO: scanned index "audit_sourceid_index" to remove 11184534 row versions
DETAIL: CPU 117.95s/72.65u sec elapsed 247.44 sec
INFO: scanned index "audit_destid_index" to remove 11184534 row versions
DETAIL: CPU 104.25s/82.63u sec elapsed 248.43 sec
INFO: "audittraillogentry": removed 11184534 row versions in 268598 pages
DETAIL: CPU 3.74s/1.17u sec elapsed 9.45 sec
INFO: scanned index "audittraillogentry_pkey" to remove 11184546 row versions
DETAIL: CPU 21.24s/40.72u sec elapsed 68.78 sec
INFO: scanned index "audit_intime_index" to remove 11184546 row versions
DETAIL: CPU 34.29s/56.72u sec elapsed 99.63 sec
INFO: scanned index "audit_outtime_index" to remove 11184546 row versions
DETAIL: CPU 33.83s/60.99u sec elapsed 105.22 sec
INFO: scanned index "audit_sourceid_index" to remove 11184546 row versions
DETAIL: CPU 114.26s/70.11u sec elapsed 239.56 sec
INFO: scanned index "audit_destid_index" to remove 11184546 row versions
DETAIL: CPU 100.73s/73.28u sec elapsed 228.37 sec
INFO: "audittraillogentry": removed 11184546 row versions in 268538 pages
DETAIL: CPU 3.80s/1.18u sec elapsed 7.79 sec
INFO: scanned index "audittraillogentry_pkey" to remove 11184523 ro
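As an aside, a VACUUM that is running like this can be watched from a second session through the standard progress view (available since PostgreSQL 9.6; column names as in releases current at the time of this thread). A minimal query, using the table name from the output above:

    SELECT p.pid, p.phase,
           p.heap_blks_total, p.heap_blks_scanned, p.heap_blks_vacuumed,
           p.index_vacuum_count, p.num_dead_tuples
    FROM pg_stat_progress_vacuum AS p
    JOIN pg_class AS c ON c.oid = p.relid
    WHERE c.relname = 'audittraillogentry';

Here index_vacuum_count shows how many rounds of index scanning have already been performed, which is the number that keeps climbing in a run like the one above.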
Re: Autovacuum not functioning for large tables but it is working for a few other small tables.
On Fri, 2021-02-19 at 10:51 +, M Tarkeshwar Rao wrote:
> Please find the VACUUM (VERBOSE) output below. Can you please suggest what
> the reason is? How can we avoid these scenarios?
>
> The customer tried to run VACUUM (VERBOSE) last night, but it ran
> continuously for 5 hours without any visible progress, so they had to abort
> it before it exhausted their maintenance window.
>
> db_Server14=# VACUUM (VERBOSE) audittraillogentry;
> INFO: vacuuming "mmsuper.audittraillogentry"
> INFO: scanned index "audittraillogentry_pkey" to remove 11184539 row versions
> DETAIL: CPU 25.24s/49.11u sec elapsed 81.33 sec
> INFO: scanned index "audit_intime_index" to remove 11184539 row versions
> DETAIL: CPU 23.27s/59.28u sec elapsed 88.63 sec
> INFO: scanned index "audit_outtime_index" to remove 11184539 row versions
> DETAIL: CPU 27.02s/55.10u sec elapsed 92.04 sec
> INFO: scanned index "audit_sourceid_index" to remove 11184539 row versions
> DETAIL: CPU 110.81s/72.29u sec elapsed 260.71 sec
> [and so on, the same indexes are scanned over and over]

VACUUM collects dead tuple identifiers until "maintenance_work_mem" is full,
then scans all the indexes to remove those entries before it continues with
the table. If that parameter is small, the indexes have to be scanned many
times, which is exactly what you see above.

Try increasing "maintenance_work_mem" to 1GB (if you have enough RAM); that
will make the VACUUM considerably faster.

Yours,
Laurenz Albe

-- 
Cybertec | https://www.cybertec-postgresql.com
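For the manual run, applying that advice could look like the following sketch. The '1GB' value is only an illustration and should be sized to the RAM actually free on the server; note that autovacuum workers are governed by "autovacuum_work_mem", which falls back to "maintenance_work_mem" when left at its default of -1.

    -- raise the limit for the current session only, then run the vacuum
    SET maintenance_work_mem = '1GB';
    VACUUM (VERBOSE) audittraillogentry;

    -- or set it cluster-wide (requires superuser; takes effect after a reload)
    ALTER SYSTEM SET maintenance_work_mem = '1GB';
    SELECT pg_reload_conf();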
