What specific circumstances trigger Autovacuum wraparound

2018-06-04 Thread Daniel Lagerman
Hello!

Please note that I have read:
https://www.postgresql.org/docs/current/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

And I think I understand how the settings are supposed to work, however I
do have a question in regard to autovacuum wraparound and how it is
triggered. I have a PG 9.4.3 server that has one table with a calculated
age of about 320 million; this is the oldest table in this database, and
therefore the age of the DB is the same as that of the table. This is
calculated with the SQL provided on the page above.
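For reference, the age queries from that page are along these lines (plain
catalog queries, nothing specific to my schema):

SELECT c.oid::regclass AS table_name, age(c.relfrozenxid) AS xid_age
  FROM pg_class c
 WHERE c.relkind = 'r'
 ORDER BY 2 DESC;

SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY 2 DESC;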

Before I get into the settings, I will explain the behavior. Right now
there are 6 wraparound autovacuums running; I have 6 workers configured, so
nothing strange about that. What puzzles me is that only 3 of the 6 tables
have not recently completed an autovacuum, and only one table was over the
old limit when it started.
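For anyone following along, a quick way to list those workers (standard
pg_stat_activity; the LIKE pattern matches how autovacuum workers report
themselves):

SELECT pid, xact_start, query
  FROM pg_stat_activity
 WHERE query LIKE 'autovacuum:%';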

The settings that are set are as follows:
vacuum_freeze_min_age = 10000000
autovacuum_freeze_max_age = 600000000
vacuum_freeze_table_age = 400000000
vacuum_cost_delay = 10
vacuum_cost_limit = 2000
autovacuum_vacuum_cost_limit = 200
autovacuum_vacuum_cost_delay = 20

All other settings for vacuum/autovacuum are at the PG 9.4.3 defaults, and
the server has been restarted after the settings were changed.

If I read the article above, and I quote, "autovacuum will be invoked on
it approximately once every autovacuum_freeze_max_age minus
vacuum_freeze_min_age transactions", to me this indicates that it would run
around every 590 million transactions after my change. But maybe it is too
late once it has already triggered? With the default values it would be
around 150 million, which lines up better with what I see.
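Spelling out that arithmetic (the defaults for those two settings are 200
million and 50 million; the other numbers are the values I set above):

  defaults:    200,000,000 - 50,000,000 = 150,000,000
  my settings: 600,000,000 - 10,000,000 = 590,000,000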

I also read, and I quote: "so it can be useful to try to maximize the
interval between forced autovacuums on very large static tables. Obviously
one can do this either by increasing autovacuum_freeze_max_age or
decreasing vacuum_freeze_min_age."

In my scenario the table is not static, but I read this as saying that you
can delay the forced vacuum by changing these parameters.

What also puzzles me is that even once PG completes an autovacuum (all
tables except 3 right now), the wraparound autovacuum just loops over every
table once more. Some tables have been autovacuumed for wraparound 4-5
times since the last restart of the PG server 2 days ago. It seems to me
that because the database itself is over the default limit, autovacuum is
running in a loop over all tables to try to get it below that limit.

There are no normal vacuums that I can see, which means I can't run
anything like flexible freeze during off hours (the middle of the night)
without manually cancelling the autovacuum, which might not be a good idea.

I did attempt to run flexible freeze for 45 hours during the weekend (with
very aggressive settings compared to autovacuum), and it did not complete
the table, which is around 1 TB, in that time. I'm worried that these two
vacuums are fighting/resetting each other and will never complete if I keep
doing that, so I have stopped that for now.

I have another server with the same specs, where the same table is actually
larger on disk than on the server I'm currently working on, and there I do
not see this behavior; vacuum actually completed in a couple of days of
running.

Hopefully I have not missed anything very obvious. I'm mostly interested in
an explanation of why PG is behaving as it is, since it's always good to
understand why something is happening even if you can't change it. I would
also like to know whether cancelling autovacuum to run flexible freeze
during off hours, in order to try to speed up the process, would set me
back or not.

Best regards

Daniel


Re: What specific circumstances trigger Autovacuum wraparound

2018-06-05 Thread Daniel Lagerman
Hello David!

Thanks for taking the time to respond. Upgrading is not in scope for this
server right now, even though it is certainly something I will do down the
line.
I talked to a colleague of mine and he dug up something from the PG
archives that is very interesting and probably explains the behavior I'm
seeing. So I thought I would post it here so that if this question comes up
again it is easier to find in the future, for PG 9.4 specifically.

There is another setting that controls autovacuum, called
autovacuum_multixact_freeze_max_age. In PG 9.5 it is easy to get the
relevant value for a table and see whether you have hit the default limit
of 400 million; however, in PG 9.4 and earlier the function mxid_age() does
not exist, so you can't query this value directly, hence this reference
from the PG archives is crucial:
http://www.postgresql-archive.org/could-not-access-status-of-transaction-pg-multixact-issue-td5822248.html

The thread above references a bug report, BUG #11264:
https://www.postgresql.org/message-id/flat/20140827175105.GI7046%40eldon.alvh.no-ip.org#20140827175105.gi7...@eldon.alvh.no-ip.org
In that thread we can see Álvaro Herrera write the following paragraph:

"A better way not involving mxid_age() would be to use pg_controldata to
extract the current value of the mxid counter, then subtract the current
relminmxid from that value."

Given that statement, we took a look at this server and the table in
question. We could see that NextMultiXactId was 2640838929 and the
relminmxid for that specific table was 1791982183. Now, 2640838929 -
1791982183 = 848856746, and you can see that this value is more than double
the default of 400 million, which would explain why vacuum is running:

"As a safety device, a whole-table vacuum scan will occur for any table
whose multixact-age is greater than autovacuum_multixact_freeze_max_age.
Whole-table vacuum scans will also occur progressively for all tables,
starting with those that have the oldest multixact-age, if the amount of
used member storage space exceeds the amount 50% of the addressible storage
space. Both of these kinds of whole-table scans will occur even if
autovacuum is nominally disabled."

This calculation is also confirmed by looking at the git commit that added
mxid_age(): this is exactly what that function returns. And even though the
5 other tables are not over this value, we suspect that because we are
using more than 50% of the addressable member storage space, autovacuum
takes the "opportunity" to run on these tables as well.

Now you could of course increase this value as well (I'm not sure what the
maximum is), but I would rather complete the vacuum. So the plan for now is
that during the weekend, when there is no activity at all except for a
backup, I will increase the maintenance work memory from a relatively low
1024 MB to almost all the memory I have on this server, set the server to
use one worker, and then see if autovacuum can handle the cleanup by
itself, or at least close the gap enough that we are confident we will
catch up.
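A minimal sketch of that weekend configuration (standard parameter names;
the memory value is purely illustrative, not a recommendation):

maintenance_work_mem = '64GB'   # illustrative; "almost all memory on the server"
autovacuum_max_workers = 1      # needs a restart, which happens over the weekend anyway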

Best regards

Daniel


On Mon, Jun 4, 2018 at 11:47 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Monday, June 4, 2018, Daniel Lagerman  wrote:
>
>>  I have a pg 9.4.3 server that has one table with calculate age of about
>> 320 million, this is the oldest table in this database and therefore the
>> age of the DB is the same as the table.
>>
>
> It would be advisable to simply upgrade to 9.4.18 and see if any of the
> various vacuum related bug fixes solves your problem.  That's about the
> extent of my useful advice for this...
>
> David J.
>


Impact of multixact "members" limit exceeded

2018-06-13 Thread Daniel Lagerman
Hello!

I have a question where I'm trying to determine the seriousness of the
problem.

I repacked a table to give space back to the OS and that was fine. However,
since then I have noticed a multixact "members" limit exceeded error every
time there is an insert on that table. A vacuum has yet to complete on this
table.

However, looking into the table, we seem to have correct data in it, and
the table age is not that high. So it does not seem that any data is
corrupted or getting overwritten, and new data is being written into the
table fine.

Vacuum is behind, and I know that is an issue; this is a very
transaction-heavy database with long-running transactions that do not
release, and I'm working on getting the vacuum through.

Looking in the PG lists, it seems that everyone agrees that this is not a
good thing.

Álvaro Herrera:
https://www.postgresql.org/message-id/20160517174055.GA158915%40alvherre.pgsql

"In 9.4, not really. In 9.5 there's a function mxid_age() that gives you
the age of a multixact, so you'd grab the oldest from
pg_database.datminmxid and compute the age of that one. Going from the
oldest multi to the oldest offset cannot be done without an additional
function, however. It's much easier to keep track of the oldest file in
$PGDATA/pg_multixact/members/; what you really need to care about is the
size of the "hole" between the newest and the oldest files there. Once
newest starts to stomp on oldest, you're screwed."
Now, I'm not sure what this "hole" is, as new files have stopped being
written altogether.
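As a crude way to look at that directory on disk (plain ls; $PGDATA as
usual), something like this shows the newest and oldest member segment
files:

ls -lt $PGDATA/pg_multixact/members/ | head    # newest segment files
ls -lt $PGDATA/pg_multixact/members/ | tail    # oldest segment files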

I also read this answer by Andres Freund:
https://www.postgresql.org/message-id/20170809175728.opnxie26gtpsrjhk%40alap3.anarazel.de

"> We started feeding it several weeks ago and everything went smoothly
until
> we hit this issue:
>
> 2017-08-09 05:21:50.946 WIB >DETAIL: This command would create a multixact
> with 2 members, but the remaining space is only enough for 0 members.
> 2017-08-09 05:21:50.946 WIB >HINT: Execute a database-wide VACUUM in
> database with OID 20101 with reduced vacuum_multixact_freeze_min_age and
> vacuum_multixact_freeze_table_age settings.

Ugh, that's not good."
But I still can't figure out what the actual impact would be.

As for Vacuum settings they are as follows:

vacuum_freeze_min_age = 10000000
autovacuum_freeze_max_age = 600000000
vacuum_freeze_table_age = 400000000
vacuum_cost_delay = 10
vacuum_cost_limit = 2000
autovacuum_vacuum_cost_limit = 200
autovacuum_vacuum_cost_delay = 20

vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age are
at the defaults of 5 million and 150 million.

Best regards

Daniel


Re: Impact of multixact "members" limit exceeded

2018-06-13 Thread Daniel Lagerman
Hello Álvaro,

I'm running 9.4.3; I know it's an older version, but upgrading is outside
the scope right now for various reasons.
Based on the settings I posted, what would be your recommended settings
right now to resolve this situation?

Maintenance is limited to weekends as this is a database which needs to be
online 24/5 and consumes a lot of data and responds to many queries.

I very much appreciate the recommendation, but I would also like to know
the impact of this warning. Right now data does not seem to be corrupted or
overwritten, nor does the table fail to accept new data or cause issues
when read. It just seems that it has stopped creating "members", and I
can't work out what problem that would cause. The table was around 1.6 TB;
it was cleaned up and then re-packed using pg_repack to get down to around
400 GB in size, however a freeze vacuum or autovacuum has not completed.

Then, when it comes to vacuum, do you think it's worth increasing work_mem
to, say, 200 GB for the maintenance period only, and letting autovacuum go
to town on the tables with the highest age and lowest relminmxid? Or should
one instead run, for example, flexible freeze with zero pauses, as nothing
is happening on the weekends, which means no inbound transactions? Also
worth mentioning is that I restart the server on the weekend after I have
backed up the data.

I have tried running autovacuum for 40 hours before with 5 GB work_mem, on
the weekend, across 6 tables at the same time and it never completed on the
bigger tables which are around 2 TB.

Best regards

Daniel


On Wed, Jun 13, 2018 at 11:29 PM, Alvaro Herrera 
wrote:

> On 2018-Jun-13, Daniel Lagerman wrote:
>
> > Hello!
> >
> > I have a question where I'm trying to determine the seriousness of the
> > problem.
>
> What's your server version?
>
> The way to reclaim members space is to move forwards the
> pg_database.datminmxid value from the database where it is oldest; and
> the way to move *that* forward is to advance the pg_class.relminmxid
> value from the tables in which it is oldest.  So you need to set a low
> value vacuum_multixact_freeze_min_age and _table_age and vacuum those
> tables.
>
> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: Impact of multixact "members" limit exceeded

2018-06-14 Thread Daniel Lagerman
Hey Lawrence,

Thanks for the input. I will check whether this is the case; I do not
believe it is, but it's worth checking out. To me it looks like normal
inserts. I think we were just insanely behind on the vacuum. I have another
server, with the same code for the app on top of the DB, which does not
have this issue.

For sure an upgrade to PG 9.5 or 9.6 will solve a lot of issues.
Again thanks for the input, always appreciated!

Best regards

Daniel

On Thu, Jun 14, 2018 at 9:03 AM, Lawrence Jones  wrote:

> Hey Daniel,
>
> This may be entirely unrelated to your query, but we’ve previously
> experienced issues with 9.4 and crazy multixact members growth.
>
> After digging into the issue, we found the culprit was code that would
> perform the following actions:
>
> begin;
> for query in many_queries:
>   savepoint ;
>   select * from table where id='myid' for update;
>   update table increment(counter) where id='myid';
>   release savepoint;
> commit;
>
> While I found it very difficult to find documentation on the purpose of
> multixacts, and I’m certain to have critical misunderstandings around their
> workings, it seemed that Postgres was inferring each new savepoint as a new
> potential transaction that would lock the given row, and each of those
> transactions needed to be added to the on-going multixact members list.
>
> The members list (I believe) is immutable, and adding a new member
> requires construction of a new members list, and so your multixact members
> offset will grow quadratically with the number of times you lock your row.
> Where we were locking repeatedly in a large loop, we were seeing huge
> multixact members growth that would trigger the override for multixact
> vacuums, regardless of disabling or configuring your autovacuum.
>
> If you’re continually seeing multixact members growth that outpaces your
> vacuums, then I suggest having a scan for the above usage pattern, or
> alternatively upgrading. We couldn’t replicate the growth in 9.5 and above,
> as presumably newer Postgres versions correctly no-op when a transaction
> tries locking a row when its parent is already present in the multixact.
>
> This comes with the disclaimer of a Postgres amateur, it simply seemed
> relevant to your situation.
>
> Lawrence
>
>
> On 14 Jun 2018, at 04:47, Daniel Lagerman  wrote:
>
> Hello Álvaro,
>
> I'm running at 9.4.3, I know its an older version but upgrading it outside
> the scope right now for various reasons.
> Based on the settings I posted what would be your recommended settings
> right now to resolve this situation?
>
> Maintenance is limited to weekends as this is a database which needs to be
> online 24/5 and consumes a lot of data and responds to many queries.
>
> I very much appreciate the recommendation but I would also like to now the
> impact of this warning, data right now does not seem to be corrupted, get
> overwritten, for that matter not insert new data or cause issues when read.
> It just seems that is has stopped creating "members" and I can't read into
> what problem that would cause, table was around 1.6 TB, it was cleaned up
> and the re-packed using pg-repack to go down to around 400 GB in size,
> however a freeze vacuum or autovacuum did not complete.
>
> Then when it comes to vacuum, do you think its worth increasing work_mem
> to say 200 GB, for the maintenance period only, and let autovacuum  go to
> town on the tables with the highest age and lowest minmixid? Or should one
> run for example flexible freeze instead with zero pauses as nothing is
> happening on the weekends, which means no inbound transactions. Also worth
> mentioning is thatI restart the server on the weekend after I have backed
> up the data.
>
> I have tried running autovacuum for 40 hours before with 5 GB work_mem, on
> the weekend, across 6 tables at the same time and it never completed on the
> bigger tables which are around 2 TB.
>
> Best regards
>
> Daniel
>
>
> On Wed, Jun 13, 2018 at 11:29 PM, Alvaro Herrera  > wrote:
>
>> On 2018-Jun-13, Daniel Lagerman wrote:
>>
>> > Hello!
>> >
>> > I have a question where I'm trying to determine the seriousness of the
>> > problem.
>>
>> What's your server version?
>>
>> The way to reclaim members space is to move forwards the
>> pg_database.datminmxid value from the database where it is oldest; and
>> the way to move *that* forward is to advance the pg_class.relminmxid
>> value from the tables in which it is oldest.  So you need to set a low
>> value vacuum_multixact_freeze_min_age and _table_age and vacuum those
>> tables.
>>
>> --
>> Álvaro Herrera    https://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>
>
>


Re: Impact of multixact "members" limit exceeded

2018-06-14 Thread Daniel Lagerman
>
> Hello Daniel
>
> The advice from Lawrence downthread is very much on point.  If you can
> upgrade to 9.5, do it.  He mentions savepoints, but one more case is
> plpgsql blocks with EXCEPTION clauses.
>

Hello Álvaro,

Thanks, I'll check for both of those, even if I do not believe this is the
case here either.

> Okay.  There's at least one related bug fix in 9.4.4 (b6a3444fa635) but
> also 8507a5b37bd9 and ec1408155d35 are related, so you'd be better
> served by an upgrade anyway.
>

I'm not sure how to look up these commits, but for sure there are a lot of
bug fixes, and I'm pushing for us to upgrade, as 9.5 and 9.6 have
optimizations and bug fixes that make my life much easier. It looks to me
like we might have hit the bug where members run ahead of offsets, which I
think has been fixed in later versions.

>
> Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
> tables fully) and vacuum_multixact_freeze_min_age to some value like one
> million (so that they remove most of the oldest multixacts, leaving
> just the frontmost one million).  Then vacuum the tables with the
> highest multixact ages.  Make sure the datminmxid advances in all
> databases (incl. postgres template0 template1); once it has advanced,
> the oldest member files are removed.  Don't waste time processing tables
> with datminmxid higher than the minimum, as that won't free up any
> member space.
>

Thanks, we opted to go with a 1 million min age and a 100 million table
age; the tables in question are all way over that anyway. We completed
vacuum on all but two, which are larger tables, and I'm running vacuum
there right now after also bumping the maintenance work mem a bit. As this
is 9.4 I can't see the progress, except that I noted that initially it uses
less CPU, and once it finally seems to start working it uses more CPU
(10% -> 50%), but that is fine. So my hope is that the vacuum completes
overnight.
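For completeness, roughly what that boils down to per table (session-level
settings followed by a plain VACUUM; the table name is of course
hypothetical):

SET vacuum_multixact_freeze_min_age = 1000000;
SET vacuum_multixact_freeze_table_age = 100000000;
SET maintenance_work_mem = '4GB';   -- illustrative value
VACUUM (VERBOSE) my_big_table;      -- hypothetical table name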

> Corruption should not occur because 9.4.3 already contains the member
> wraparound protections (commit 3ecab37d97ed).
>

Glad to hear that, and I have not seen any problems to be honest, except
for the pg_srv log filling up.

> Uhh .. pg_repack has been known to cause catalog corruption.  I don't
> know if those bugs have been fixed.  At 2ndQuadrant we recommend *not*
> running pg_repack.
>
>
I know versions before 1.2 had some corruption issues; I have never seen it
myself in the roughly 20 times I have used it. But you should never say
never, right? I did the same operation on a duplicate server and there are
no issues there, so I believe we were just way behind on vacuum.

> work_mem does not affect vacuum.  maintenance_work_mem is the one to
> change; autovacuum_work_mem can be used for autovacuum.
>

Yep, that was a typo on my end; I was referring to maintenance_work_mem, of
course.

"Zero pauses" (vacuum_cost_delay=0) is best, because it's obviously much
> faster.
>

Yeah, I will look into that during the weekend if needed, combined with
more maintenance work mem, if the vacuum has not completed. I can't really
do zero pauses during normal operations as I need to write to the DB.

>
> So what did you do, cancel it?
>

Yes, I actually used flexible freeze with a timeout so it terminated just
before the maintenance window ended. However, now we run a straight VACUUM
on these tables due to this problem. I might even opt not to restart the
server this weekend; we don't really need a weekly restart to keep it
going, since it seems to have a good pace now.

Many thanks for your insight and guidance!


Best regards

Daniel

On Thu, Jun 14, 2018 at 10:37 AM, Alvaro Herrera 
wrote:

> Hello Daniel
>
> The advice from Lawrence downthread is very much on point.  If you can
> upgrade to 9.5, do it.  He mentions savepoints, but one more case is
> plpgsql blocks with EXCEPTION clauses.
>
> On 2018-Jun-13, Daniel Lagerman wrote:
>
> > Hello Álvaro,
> >
> > I'm running at 9.4.3, I know its an older version but upgrading it
> outside
> > the scope right now for various reasons.
>
> Okay.  There's at least one related bug fix in 9.4.4 (b6a3444fa635) but
> also 8507a5b37bd9 and ec1408155d35 are related, so you'd be better
> served by an upgrade anyway.
>
> > Based on the settings I posted what would be your recommended settings
> > right now to resolve this situation?
>
> Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
> tables fully) and vacuum_multixact_freeze_min_age to some value like one
> million (so that they remove most of the oldest multixacts, leaving
> just the frontmost one million).  Then vacuum the tables with the
> highest multixact ages.  Make sure the datminmxid advances in all
> databases (incl. postgres template0 template1); once it has advanc

Re: Impact of multixact "members" limit exceeded

2018-06-14 Thread Daniel Lagerman
> Did you observe whether the vacuumed tables' relminmxid advance?  If it
> didn't, then those vacuums are a waste of time.
>
> Note that freeze_table_age affects when vacuum does a full table scan
> vs. when it skips pages due to visibility map.  A vacuum that does a
> partial scan does *not* advance the relfrozenxid / relminmxid; only
> full-table scan vacuums can do that.  (Except in 9.6.)  If the
> relminmxid ages are younger than your 100 million table_age, vacuum
> won't do a full-table scan.
>

Yes they did, they all advanced, and no, they are not younger than 100
million. They were all over the default value of 150 million, as are the
remaining 2. relminmxid/relfrozenxid advanced, relminmxid to match
NextMultiXactId, and the age went down. But it's a good point to keep in
mind.
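(For reference, the before/after check here is just a catalog query along
these lines, with a hypothetical table name:)

SELECT c.oid::regclass AS table_name,
       c.relfrozenxid, age(c.relfrozenxid) AS xid_age,
       c.relminmxid
  FROM pg_class c
 WHERE c.relname = 'my_big_table';   -- hypothetical name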

Best regards

Daniel


On Thu, Jun 14, 2018 at 7:15 PM, Alvaro Herrera 
wrote:

> On 2018-Jun-14, Daniel Lagerman wrote:
>
> > > Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to
> scan
> > > tables fully) and vacuum_multixact_freeze_min_age to some value like
> one
> > > million (so that they remove most of the oldest multixacts, leaving
> > > just the frontmost one million).  Then vacuum the tables with the
> > > highest multixact ages.  Make sure the datminmxid advances in all
> > > databases (incl. postgres template0 template1); once it has advanced,
> > > the oldest member files are removed.  Don't waste time processing
> tables
> > > with datminmxid higher than the minimum, as that won't free up any
> > > member space.
> >
> > Thanks, we opted to go with 1 million min and 100 million table age, the
> > tables in question are all way over that anyway. We completed vacuum on
> all
> > but two which are larger tables and I'm running Vacuum there right now
> > after I also bumped the maintenance work mem a bit. As this is 9.4 I
> can't
> > see the progress except that I noted that initialy it uses less CPU and
> > once it seems to finnaly start working it uses more CPU 10%->50% but that
> > is fine. So my hope is that the Vacuum completes over night.
>
> Did you observe whether the vacuumed tables' relminmxid advance?  If it
> didn't, then those vacuums are a waste of time.
>
> Note that freeze_table_age affects when vacuum does a full table scan
> vs. when it skips pages due to visibility map.  A vacuum that does a
> partial scan does *not* advance the relfrozenxid / relminmxid; only
> full-table scan vacuums can do that.  (Except in 9.6.)  If the
> relminmxid ages are younger than your 100 million table_age, vacuum
> won't do a full-table scan.
>
> Cheers
>
> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>