Re: Order by and timestamp SOLVED

2020-03-17 Thread Björn Lundin



>> So insert is
>> bnl@ibm2:~/db$ psql
>> Tidtagning är på.
>> AUTOCOMMIT off
>> psql (9.6.15, server 9.4.15)
>> Skriv "help" för hjälp.
> 
> Except you are using psql 9.6.15 against a 9.4.15 server.
> 
> What happens if you use psql(9.4.15) to do sort query against 9.4.15 server?

So this is more than strange

bnl@ibm2:~$ /usr/lib/postgresql/9.4/bin/psql
Timing is on.
AUTOCOMMIT off
psql (9.4.15)
Type "help" for help.


This works. I did not realize I ran different client/server versions.
9.6 must have come with a Debian upgrade where I did not migrate the database
to 9.6, but got psql 9.6 as the default (/usr/bin/psql).


However, AFTER I checked the bad sorting - sep/oct 2016 and early apr 2017 -
with psql 9.4, I connected with psql 9.6 again.
And now the sorting error is gone here too!

I cannot reproduce it anymore

So I learned this - always use the same version of client and server

Many thanks to Adrian and Tom


--
Björn Lundin
b.f.lun...@gmail.com






Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Hi,

Is someone using temporary tablespaces on a RAM disk? Any experiences with
that?
I did some quick tests and checked the archives, but could not find any
information that either confirmed it is a bad idea or the opposite.
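
For reference, the kind of setup I tested looks like this (paths and names
are only examples):

# as root: mount a RAM disk and hand it to postgres
mount -t tmpfs -o size=4G tmpfs /mnt/pg_ramdisk
chown postgres:postgres /mnt/pg_ramdisk

-- as superuser in psql
CREATE TABLESPACE ram_temp LOCATION '/mnt/pg_ramdisk';
ALTER SYSTEM SET temp_tablespaces = 'ram_temp';
SELECT pg_reload_conf();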

Thanks in advance
Daniel



Automatic failover

2020-03-17 Thread Sonam Sharma
I have set up replication using repmgr. I wanted to know how much time the
slave node will take to become the new primary. If there is any documentation
on automatic failover, can someone please share it? With automatic failover,
how much time does the slave take to become the new primary?
Thanks..


Re: Automatic failover

2020-03-17 Thread Paul Foerster
Hi Sonam,

On Tue, Mar 17, 2020 at 11:30 AM Sonam Sharma  wrote:
>
> I have setup replication using repmgr. Wanted to know how much time the slave 
> node will take to become new primary ?? If any document, can someone please 
> share of auto failover.  With automatic failover, how much time the slave 
> takes to become new primary .
> Thanks..

I don't know about repmgr. We use Patroni with etcd. A switchover
takes only a second or two with that. And the (new) replica is usually
in sync again in less than about 5 seconds or so.

Cheers,
Paul




Re: Mixed Locales and Upgrading

2020-03-17 Thread Don Seiler
On Mon, Mar 16, 2020 at 10:28 AM Tom Lane  wrote:

>
> Egad.
>

My thoughts exactly.


> Well, in principle you could likewise manually update pg_database's
> datcollate and datctype columns to say "en_US.utf8".  However, there's
> a much bigger problem here --- what steps if any did this cowboy take
> to ensure that the data inside the database was valid UTF8?
>

No steps that I've seen from the chat history I've been able to search. I'm
not sure if there was an (invalid) assumption that LATIN1 is a subset of
UTF-8 or if it was done in a panic to get the import/update working years
ago.


> I don't think you should use pg_upgrade here at all.  A dump/restore
> is really the only way to make sure that you have validly encoded data.
>

That is what I thought, and probably not what they'll want to hear given
the downtime involved. Even with parallel dump/restore jobs, I imagine it
will take quite a while (this first DB is almost 900GB).


> However, if it's only one database out of a bunch, you could do something
> like
>
> * pg_dump that one database;
> * drop said database;
> * pg_upgrade everything else;
> * restore that one database from dump.
>

In the case of this busy cluster, the layout is like this:

postgres=# \l
                                   List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 f_all     | postgres | UTF8     | en_US      | en_US      |
 postgres  | postgres | LATIN1   | en_US      | en_US      | =Tc/postgres         +
           |          |          |            |            | postgres=CTc/postgres
 template0 | postgres | LATIN1   | en_US      | en_US      | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
(4 rows)

So the template1 DB was dropped and recreated with the collate and ctype as
well, and then set to be a template again. But I believe that was well after
f_all was changed (so probably no need for the template1 change).
In this case, if this is the only DB in the cluster, would it make sense to
restore a dump of f_all into a pre-created en_US.UTF-8 database?
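i.e., something along these lines (names illustrative):

createdb --encoding=UTF8 --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8 \
         --template=template0 f_all_new
pg_dump -Fd -j 4 -f /backup/f_all.dir f_all
pg_restore -j 4 -d f_all_new /backup/f_all.dir

(Using template0 avoids any encoding/locale dependency on template1.)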

We have a few other mixed environments similar to this as well. Some have
postgres and both template DBs with this same UTF8/en_US/en_US
configuration.

Is logical replication an option here? If the target DB were setup as
en_US.UTF-8 across the board, would logical replication safely replicate
and convert the data until we could then cut over?


Thanks,
Don.
-- 
Don Seiler
www.seiler.us


Re: Mixed Locales and Upgrading

2020-03-17 Thread Don Seiler
On Tue, Mar 17, 2020 at 8:06 AM Don Seiler  wrote:

> On Mon, Mar 16, 2020 at 10:28 AM Tom Lane  wrote:
>
>>
> Well, in principle you could likewise manually update pg_database's
>> datcollate and datctype columns to say "en_US.utf8".  However, there's
>> a much bigger problem here --- what steps if any did this cowboy take
>> to ensure that the data inside the database was valid UTF8?
>
>
Is there a way to programmatically check for data that might be a problem
now?

-- 
Don Seiler
www.seiler.us


Re: Mixed Locales and Upgrading

2020-03-17 Thread Tom Lane
Don Seiler  writes:
> On Mon, Mar 16, 2020 at 10:28 AM Tom Lane  wrote:
>> I don't think you should use pg_upgrade here at all.  A dump/restore
>> is really the only way to make sure that you have validly encoded data.

> That is what I thought, and probably not what they'll want to hear given
> the downtime involved. Even with parallel dump/restore jobs, I imagine it
> will take quite a while (this first DB is almost 900GB).

Yikes.  Well, if there aren't obvious operational problems, it might be
that the data is actually UTF8-clean, or almost entirely so.  Maybe you
could look at the problem as being one of validation.  In that case,
it'd be possible to consider not taking the production DB down, but just
doing a pg_dump from it and seeing if you can restore somewhere else.
If not, fix the broken data; repeat till clean.  After that you could
do pg_upgrade with a clear conscience.  I think you'll still end up
manually fixing the inconsistent datcollate/datctype settings though.
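
The manual fix would be a catalog tweak along these lines (a sketch --
superuser only, and only once the restore test passes cleanly; using the
f_all database from upthread as the example):

UPDATE pg_database
   SET datcollate = 'en_US.utf8', datctype = 'en_US.utf8'
 WHERE datname = 'f_all';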

> Is logical replication an option here? If the target DB were setup as
> en_US.UTF-8 across the board, would logical replication safely replicate
> and convert the data until we could then cut over?

I think you need to make sure the data is clean first.  I doubt that
logical replication will magically fix any problems in data it's trying
to push over, and I also doubt that we have any really good answer to
what happens if a replication update fails due to bad data.

regards, tom lane




Re: Order by and timestamp SOLVED

2020-03-17 Thread Tom Lane
Björn Lundin  writes:
>> What happens if you use psql(9.4.15) to do sort query against 9.4.15 server?

> However AFTER I checked the bad sorting - sep/oct 2016 and early apr 2017 
> With psql 9.4 I connected with psql 9.6 again.
> And now the sorting error is gone here too!

Boy ... I don't have any confidence in that answer.  psql does not re-sort
data, nor does it have any way to affect what the server does.  It seems
to me that if this actually is a version inconsistency problem, that's
a bug in itself.

I am starting to wonder though if you had psql's FETCH_COUNT option
active in one configuration and not the other, and if so whether that
could explain anything.

regards, tom lane




Re: Mixed Locales and Upgrading

2020-03-17 Thread Don Seiler
On Tue, Mar 17, 2020 at 8:56 AM Tom Lane  wrote:

>
> Yikes.  Well, if there aren't obvious operational problems, it might be
> that the data is actually UTF8-clean, or almost entirely so.  Maybe you
> could look at the problem as being one of validation.  In that case,
> it'd be possible to consider not taking the production DB down, but just
> doing a pg_dump from it and seeing if you can restore somewhere else.
> If not, fix the broken data; repeat till clean.  After that you could
> do pg_upgrade with a clear conscience.  I think you'll still end up
> manually fixing the inconsistent datcollate/datctype settings though.
>

For this test, would we restore into an en_US.UTF-8/UTF8 database? Then,
assuming no errors (or fixing any errors until clean), we change the
datcollate/datctype settings in prod and proceed with pg_upgrade (obviously
after testing all of that heavily)?

What are the ramifications of changing collation like that? Should we
consider rebuilding indexes ASAP after that?

Don.

-- 
Don Seiler
www.seiler.us


Re: Order by and timestamp SOLVED

2020-03-17 Thread Björn Lundin
On 17 Mar 2020, at 15:05, Tom Lane wrote:
> 
> Björn Lundin  writes:
>>> What happens if you use psql(9.4.15) to do sort query against 9.4.15 server?
> 
>> However AFTER I checked the bad sorting - sep/oct 2016 and early apr 2017 
>> With psql 9.4 I connected with psql 9.6 again.
>> And now the sorting error is gone here too!
> 
> Boy ... I don't have any confidence in that answer.

It is not an answer. It is an observation.

>  psql does not re-sort
> data, nor does it have any way to affect what the server does.

I did not say that. I do not think that.
But - still I got the above result
* bad order with psql 9.6 towards db 9.4
* good order with psql 9.4 towards db 9.4
* good order with psql 9.6 towards db 9.4

>  It seems
> to me that if this actually is a version inconsistency problem, that's
> a bug in itself.
> I am starting to wonder though if you had psql's FETCH_COUNT option
> active in one configuration and not the other, and if so whether that
> could explain anything.

FETCH_COUNT is a variable set at runtime / read from .psqlrc?
Then it is the same.
I have one .psqlrc on ibm2.

bnl@ibm2:~$ locate .psqlrc
/home/bnl/.psqlrc
bnl@ibm2:~$ 

bnl@ibm2:~$ cat .psqlrc
\timing 
\set AUTOCOMMIT 'off' 
\echo 'AUTOCOMMIT' :AUTOCOMMIT



But I do notice that in .psql_history I see strange '\040' sequences (octal for space) - sometimes 
bnl@ibm2:~$ cat .psql_history
_HiStOrY_V2_
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040amarkets\040order\040by\040startts;
select * from amarkets order by startts;  
select\040*\040from\040amarkets\040order\040by\040startts;



So I now did 
bnl@ibm2:~$ find / -name psql
/usr/lib/postgresql/9.4/bin/psql
/usr/lib/postgresql/9.6/bin/psql
^C
bnl@ibm2:~$ /usr/lib/postgresql/9.4/bin/psql
Timing is on.
AUTOCOMMIT off
psql (9.4.15)
Type "help" for help.

bnl=> select * from AEVENTS order by OPENTS;

Time: 278,207 ms


bnl@ibm2:~$ cat .psqlrc
\timing 
\set AUTOCOMMIT 'off' 
\echo 'AUTOCOMMIT' :AUTOCOMMIT

bnl@ibm2:~$ cat .psql_history
_HiStOrY_V2_
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040amarkets\040order\040by\040startts;\040\040
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040AEVENTS\040order\040by\040OPENTS;

And yes, it is the same.
Trying 9.6:

bnl@ibm2:~$ /usr/lib/postgresql/9.6/bin/psql
Tidtagning är på.
AUTOCOMMIT off
psql (9.6.15, server 9.4.15)
Skriv "help" för hjälp.

bnl=> select * from AEVENTS order by OPENTS limit 1;

Tid: 19,240 ms
bnl=> ^D\q



bnl@ibm2:~$ cat .psql_history
_HiStOrY_V2_
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040amarkets\040order\040by\040startts;\040\040
select\040*\040from\040amarkets\040order\040by\040startts;
select\040*\040from\040AEVENTS\040order\040by\040OPENTS;
select\040*\040from\040AEVENTS\040order\040by\040OPENTS\040limit\0401;
bnl@ibm2:~$ 

And now that one too.


I will move from this machine.

Thanks for the suggestions, both Tom and Adrian

I will keep the machine if you are interested in pursuing it;
otherwise I'll stop this thread.
That is, I am convinced enough that mixing versions, combined with perhaps old
hardware, did something strange.

--
Björn Lundin
b.f.lun...@gmail.com






Re: Mixed Locales and Upgrading

2020-03-17 Thread Tom Lane
Don Seiler  writes:
> On Tue, Mar 17, 2020 at 8:56 AM Tom Lane  wrote:
>> Yikes.  Well, if there aren't obvious operational problems, it might be
>> that the data is actually UTF8-clean, or almost entirely so.  Maybe you
>> could look at the problem as being one of validation.

> For this test, would we restore into an en_US.UTF-8/UTF8 database? Then,
> assuming no errors (or fixing any errors until clean), we change the
> datcollate/datctype settings in prod and proceed with pg_upgrade (obviously
> after testing all of that heavily)?

Yeah, that's the basic idea.

> What are the ramifications of changing collation like that? Should we
> consider rebuilding indexes ASAP after that?

Text indexes would definitely be at risk here.  I'm not really certain
how bad the problem would be.  Do you have a feeling for how much of
the data is 100% ASCII?  If you could be sure of that for any given
column, you wouldn't have to reindex indexes on that column.
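
One way to check a given column would be something like (illustrative
table/column names; NULLs are skipped by the aggregate):

-- true only if every value in the column is pure ASCII
SELECT bool_and(mycol !~ '[^\x01-\x7f]') FROM mytab;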

regards, tom lane




Keeping Admin-Owner user but creating new user with effective Admin-Owner access rights?

2020-03-17 Thread AC Gomez
We have the following scenario...

We've inherited a situation where we have a master admin user that's used
across the board for all processes.

We need to undo that one process at a time. So, for each process we thought
of creating two secondary users, among which we will rotate a password.

However, since a PostgreSQL DB cannot have more than one owner, these
secondary users cannot act on the DB objects the same way - that is our
understanding.

The question is, if a DB already has an owner that we want to keep as the
owner for now, can we create an equivalent user that will effectively have
the same behaviour as the owner while not being the owner?

And, will any objects created by this new user be fully accessible by the
original master user?

Thanks


Re: Order by and timestamp SOLVED

2020-03-17 Thread Adrian Klaver

On 3/17/20 12:28 AM, Björn Lundin wrote:





So insert is
bnl@ibm2:~/db$ psql
Tidtagning är på.
AUTOCOMMIT off
psql (9.6.15, server 9.4.15)
Skriv "help" för hjälp.


Except you are using psql 9.6.15 against a 9.4.15 server.

What happens if you use psql(9.4.15) to do sort query against 9.4.15 
server?


So this is more than strange

bnl@ibm2:~$ /usr/lib/postgresql/9.4/bin/psql
Timing is on.
AUTOCOMMIT off
psql (9.4.15)
Type "help" for help.


This works. I did not realize I ran different client/server versions.
9.6 must have come with a Debian upgrade where I did not migrate the
database to 9.6, but got psql 9.6 as the default (/usr/bin/psql).


However, AFTER I checked the bad sorting - sep/oct 2016 and early apr 2017 -
with psql 9.4, I connected with psql 9.6 again.
And now the sorting error is gone here too!

I cannot reproduce it anymore

So I learned this - always use the same version of client and server


I made the suggestion to try the same psql version as the server because 
I had run out of ideas. It is usually not an issue to mix and match 
psql/server versions. In fact, the Debian/Ubuntu packaging will by 
default use the latest psql to connect to all installed versions, as it 
is backwards compatible. I have also gone the other way, older psql/newer 
server; you just lose any added features from the newer psql.


I am still not sure that this can be marked solved. I am trying to 
figure out how running a different version of psql once can affect 
another version of psql. That would seem to imply psql changed something 
on the server, and AFAIK sorting/ordering is done by the server, not the 
client.




Many thanks to Adrian and Tom


--
Björn Lundin
b.f.lun...@gmail.com 







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Keeping Admin-Owner user but creating new user with effective Admin-Owner access rights?

2020-03-17 Thread Adrian Klaver

On 3/17/20 8:23 AM, AC Gomez wrote:


We have the following scenario...

We've inherited a situation where we have a master admin user that's 
used across the board for all processes.


We need to undo that one process at a time. So, for each process we 
thought of creating two secondary users, among which we will rotate a 
password.


However, since a PostgreSQL Db cannot have more than one owner then 
these secondary users cannot act on the DB objects the same way, that is 
our understanding.


The question is, if a DB already has an owner that we want to keep as 
the owner for now, can we create an equivalent user that will 
effectively have the same behaviour as the owner while not being the owner?


And, will any objects created by this new user be fully accessible by 
the original master user?


Wouldn't INHERIT and IN ROLE work?:

https://www.postgresql.org/docs/12/sql-createrole.html

"INHERIT
NOINHERIT

These clauses determine whether a role “inherits” the privileges of 
roles it is a member of. A role with the INHERIT attribute can 
automatically use whatever database privileges have been granted to all 
roles it is directly or indirectly a member of. Without INHERIT, 
membership in another role only grants the ability to SET ROLE to that 
other role; the privileges of the other role are only available after 
having done so. If not specified, INHERIT is the default.
"


"IN ROLE role_name

The IN ROLE clause lists one or more existing roles to which the 
new role will be immediately added as a new member. (Note that there is 
no option to add the new role as an administrator; use a separate GRANT 
command to do that.)

"



Thanks



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Order by and timestamp SOLVED

2020-03-17 Thread Björn Lundin


> 
> I am still not sure that this can be marked solved. I am trying to figure out 
> how running a different version of psql once can affect another version of 
> psql. That would seem to imply psql changed something on the server and AFAIK 
> sorting/ordering is done by the server not the client.
> 

Ok if you want, I can
* keep the server as is
* give you ssh access to it

Let me know, and I’ll mail you (privately) login details 
I am going to move the data, and I have the whole set of daily pg_dumps I need 
to set it up elsewhere. 

--
Björn Lundin
b.f.lun...@gmail.com






RE: Temporary tablespaces on a RAM disk

2020-03-17 Thread Kevin Brannen
Daniel Westermann wrote:
> is someone using temporary tablespaces on a RAM disk ? Any experiences with 
> that?
> I did some quick tests and checked the archives but could not find any 
> information that either confirmed it is a bad idea or the opposite.


Well, we point our "stats_temp_directory" to a tmpfs partition, which is 
probably fairly common (or so I'd guess).
But a full tablespace with tables and everything? That sounds pretty risky for 
anything other than a place to store
data for transformation or summary of data that could be recreated by 
restarting a process if the server goes down
in the middle.
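
For reference, the sort of thing we do looks like this (paths illustrative):

# /etc/fstab -- a small tmpfs for the stats files
tmpfs  /var/run/pg_stats_tmp  tmpfs  size=256M,mode=1777  0 0

# postgresql.conf
stats_temp_directory = '/var/run/pg_stats_tmp'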

I think you'd be better off explaining what your goal is and then people could 
give you better comments.

HTH,
Kevin



Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread David G. Johnston
On Tue, Mar 17, 2020 at 2:49 AM Daniel Westermann (DWE) <
daniel.westerm...@dbi-services.com> wrote:

> is someone using temporary tablespaces on a RAM disk ? Any experiences
> with that?
> I did some quick tests and checked the archives but could not find any
> information that either confirmed it is a bad idea or the opposite.
>

There is a nice big bold warning callout in the documentation that covers
this explicitly.

https://www.postgresql.org/docs/12/manage-ag-tablespaces.html

David J.


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Ron



On 3/17/20 12:19 PM, David G. Johnston wrote:
On Tue, Mar 17, 2020 at 2:49 AM Daniel Westermann (DWE) 
> wrote:


is someone using temporary tablespaces on a RAM disk ? Any experiences
with that?
I did some quick tests and checked the archives but could not find any
information that either confirmed it is a bad idea or the opposite.


There is a nice big bold warning callout in the documentation that covers 
this explicitly.


https://www.postgresql.org/docs/12/manage-ag-tablespaces.html


Warning
Placing a tablespace on a temporary file system like a RAM disk risks the 
reliability of the entire cluster.


But aren't temporary files removed when you restart Postgres?  (I'm assuming 
that temp_tablespaces is treated "the same" as data/pgsql_tmp.  Is that a 
false assumption?)



--
Angular momentum makes the world go 'round.


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Tom Lane
Ron  writes:
> On 3/17/20 12:19 PM, David G. Johnston wrote:
>> There is a nice big bold warning callout in the documentation that covers 
>> this explicitly.
>> https://www.postgresql.org/docs/12/manage-ag-tablespaces.html

>>> Warning
>>> Placing a tablespace on a temporary file system like a RAM disk risks the 
>>> reliability of the entire cluster.

> But aren't temporary files removed when you restart Postgres?  (I'm assuming 
> that temp_tablespaces is treated "the same" as data/pgsql_tmp.  Is that a 
> false assumption?)

I think the main issue is that there's no mechanism preventing you from
putting regular (non-temp) tables into the "temporary" tablespace.
If you do, crash recovery will get very unhappy when it tries to replay
updates for those tables and they're not there.
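
For instance, nothing stops you from doing this today (tablespace name
illustrative):

-- a perfectly ordinary, permanent table placed on the RAM disk
CREATE TABLE not_actually_temp (id int) TABLESPACE ram_temp;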

There are probably additional issues having to do with the tablespace
directory(s) disappearing.  That, you could maybe finesse by having the
postmaster start script re-create any missing directories.

All in all, I wouldn't try it without careful pre-testing of what happens
after the RAM disk gets wiped.

People have asked about this before, so maybe it'd be an idea to make
an explicit concept of a temp tablespace that only accepts temp tables,
and do whatever is needful to make that robust.  But I've not heard of
any work towards that.

regards, tom lane




Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
On Tue, Mar 17, 2020 at 2:49 AM Daniel Westermann (DWE) wrote:
is someone using temporary tablespaces on a RAM disk ? Any experiences with 
that?
I did some quick tests and checked the archives but could not find any 
information that either confirmed it is a bad idea or the opposite.

>There is a nice big bold warning callout in the documentation that covers this 
>explicitly.
>
>https://www.postgresql.org/docs/12/manage-ag-tablespaces.html

I know, I am talking about temp_tablespaces only, i.e. temporary objects like 
temporary tables, sorts, etc.

Regards
Daniel



Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Daniel Westermann wrote:

> is someone using temporary tablespaces on a RAM disk ? Any experiences with
> that?
> I did some quick tests and checked the archives but could not find any
> information that either confirmed it is a bad idea or the opposite.

> Well, we point our "stats_temp_directory" to a tmpfs partition, which is
> probably fairly common (or so I'd guess).
> But a full tablespace with tables and everything? That sounds pretty risky
> for anything other than a place to store data for transformation or summary
> of data that could be recreated by restarting a process if the server goes
> down in the middle.

> I think you'd be better off explaining what your goal is and then people
> could give you better comments.

Thank you for the hint with stats_temp_directory. I am only talking about
temp_tablespaces, which are supposed to be for temporary objects only. It is
pretty clear not to use that for persistent objects, I know.


Regards

Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Ron  writes:
> On 3/17/20 12:19 PM, David G. Johnston wrote:
>> There is a nice big bold warning callout in the documentation that covers
>> this explicitly.
>> https://www.postgresql.org/docs/12/manage-ag-tablespaces.html

>>> Warning
>>> Placing a tablespace on a temporary file system like a RAM disk risks the
>>> reliability of the entire cluster.

> But aren't temporary files removed when you restart Postgres?  (I'm assuming
> that temp_tablespaces is treated "the same" as data/pgsql_tmp.  Is that a
> false assumption?)

>I think the main issue is that there's no mechanism preventing you from
>putting regular (non-temp) tables into the "temporary" tablespace.
>If you do, crash recovery will get very unhappy when it tries to replay
>updates for those tables and they're not there.

Yes, that works, and this is why I asked about temp_tablespaces, which are
supposed to hold temporary objects only, I believe.

>There are probably additional issues having to do with the tablespace
>directory(s) disappearing.  That, you could maybe finesse by having the
>postmaster start script re-create any missing directories.

Yes, but the server will start even if the tmpfs is gone; you just need to
recreate it, which can also be handled with /etc/fstab at boot.

>All in all, I wouldn't try it without careful pre-testing of what happens
>after the RAM disk gets wiped.

Sure.

>People have asked about this before, so maybe it'd be an idea to make
>an explicit concept of a temp tablespace that only accepts temp tables,
>and do whatever is needful to make that robust.  But I've not heard of
>any work towards that.

That's what I thought temp_tablespaces are for (plus sorts, temporary files
created by materialized views ...)

Regards
Daniel




Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Tom Lane
"Daniel Westermann (DWE)"  writes:
>> People have asked about this before, so maybe it'd be an idea to make
>> an explicit concept of a temp tablespace that only accepts temp tables,
>> and do whatever is needful to make that robust.  But I've not heard of
>> any work towards that.

> That's what I thought temp_tablespaces are for ( plus sorts, temporary files 
> getting created by materialized views ... )

No ... temp_tablespaces says it's okay to use any of the listed
tablespaces to keep temporary working files in, but it doesn't
say that those tablespaces can *only* be used for that.

The whole business of temp tables (as opposed to those invisible-to-SQL
working files) in such a tablespace is a separate issue, too.  I think
that the server would mostly survive having temp-table files disappear
during reboot, but it's not an officially supported or tested scenario.

regards, tom lane




Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Laurenz Albe
On Tue, 2020-03-17 at 09:49 +, Daniel Westermann (DWE) wrote:
> is someone using temporary tablespaces on a RAM disk ? Any experiences with 
> that?
> I did some quick tests and checked the archives but could not find any 
> information that either confirmed it is a bad idea or the opposite.

Taking a step back, wouldn't it be better to increase "work_mem"
and "temp_buffers" and set "temp_file_limit", so that temporary
files are avoided whenever possible and temporary tables are cached?

Then essentially you are using the same memory for the same purposes,
but in a supported fashion.
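
As a sketch (the values are entirely workload-dependent):

# postgresql.conf
work_mem = '256MB'        # per sort/hash node, per backend -- size carefully
temp_buffers = '1GB'      # per-session cache for temporary tables
temp_file_limit = '50GB'  # cap on temporary file space per process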

Yours,
Laurenz Albe





Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
>"Daniel Westermann (DWE)"  writes:

>>> People have asked about this before, so maybe it'd be an idea to make
>>> an explicit concept of a temp tablespace that only accepts temp tables,
>>> and do whatever is needful to make that robust.  But I've not heard of
>>> any work towards that.

>> That's what I thought temp_tablespaces are for ( plus sorts, temporary files 
>> getting created by materialized views ... )

>No ... temp_tablespaces says it's okay to use any of the listed
>tablespaces to keep temporary working files in, but it doesn't
>say that those tablespaces can *only* be used for that.

Ok, understood. For me, at least, it sounds weird to put anything other than 
real temporary stuff in there.

>The whole business of temp tables (as opposed to those invisible-to-SQL
>working files) in such a tablespace is a separate issue, too.  I think
>that the server would mostly survive having temp-table files disappear
>during reboot, but it's not an officially supported or tested scenario.

Thank you, that is what I wanted to know. It works for all the cases I
tested, but it is not officially supported.

Regards
Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Daniel Westermann (DWE)
Hi  Laurenz,

>From: Laurenz Albe 
>Sent: Tuesday, March 17, 2020 20:50
>To: Daniel Westermann (DWE) ; 
>pgsql-general@lists.postgresql.org 
>Subject: Re: Temporary tablespaces on a RAM disk

>On Tue, 2020-03-17 at 09:49 +, Daniel Westermann (DWE) wrote:
>> is someone using temporary tablespaces on a RAM disk ? Any experiences with 
>> that?
>> I did some quick tests and checked the archives but could not find any 
>> information that either confirmed it is a bad idea or the opposite.

>Taking a step back, wouldn't it be better to increase "work_mem"
>and "temp_buffers" and set "temp_file_limit", so that temporary
>files are avoided whenever possible and temporary tables are cached?

I don't think you can avoid that for all cases, especially when working with 
large data sets. That was one of the reasons for my initial question.

>Then essentially you are using the same memory for the same purposes,
>but in a supported fashion.

Agreed.

Regards
Daniel


Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Laurenz Albe
On Tue, 2020-03-17 at 19:57 +, Daniel Westermann (DWE) wrote:
> >> is someone using temporary tablespaces on a RAM disk ? Any experiences 
> >> with that?
> >> I did some quick tests and checked the archives but could not find any
> >> information that either confirmed it is a bad idea or the opposite.
> 
> >Taking a step back, wouldn't it be better to increase "work_mem"
> >and "temp_buffers" and set "temp_file_limit", so that temporary
> >files are avoided whenever possible and temporary tables are cached?
> 
> I don't think you can avoid that for all cases, especially when working
> with large data sets. That was one of the reasons for my initial question.

But with your proposed setup, any query that needs more temp space
than your RAM disk has will fail.  I don't think that is good for large
data sets.

Setting "work_mem" and "temp_buffers" high, you can use all the RAM
you have, and if it still is not enough, it spills to disk rather than die.

Yours,
Laurenz Albe





Re: Temporary tablespaces on a RAM disk

2020-03-17 Thread Jerry Sievers
Laurenz Albe  writes:

> On Tue, 2020-03-17 at 19:57 +, Daniel Westermann (DWE) wrote:
>
>> >> is someone using temporary tablespaces on a RAM disk ? Any experiences 
>> >> with that?
>> >> I did some quick tests and checked the archives but could not find any
>> >> information that either confirmed it is a bad idea or the opposite.
>> 
>> >Taking a step back, wouldn't it be better to increase "work_mem"
>> >and "temp_buffers" and set "temp_file_limit", so that temporary
>> >files are avoided whenever possible and temporary tables are cached?
>> 
>> I don't think you can avoid that for all cases, especially when working
>> with large data sets. That was one of the reasons for my initial question.
>
> But with your proposed setup, any query that needs more temp space
> than your RAM disk has will fail.  I don't think that is good for large
> data sets.

Perhaps not, but filling the disk on the same volume as WAL is also a
serious problem, in case the process that eventually took the storage to
100% and got an ENOSPC was the WAL write :-)

Er, but any standard one-tablespace configuration is at risk of that,
generally.

FWIW

>
> Setting "work_mem" and "temp_buffers" high, you can use all the RAM
> you have, and if it still is not enough, it spills to disk rather than die.
>
> Yours,
> Laurenz Albe
>
>
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net




Fwd: PG12 autovac issues

2020-03-17 Thread Justin King
Apologies, I accidentally sent this to the pgsql-admin list initially
but intended it go here:

We have a database that isn't overly large (~20G), but gets incredibly
frequent updates.  Here's an example table:

feedi=# select * from pg_stat_all_tables where schemaname =
'production' and relname = 'tita';
relid = 16786
schemaname = production
relname = tita
seq_scan = 23365
seq_tup_read = 403862091
idx_scan = 26612759248
idx_tup_fetch = 19415752701
n_tup_ins = 24608806
n_tup_upd = 4207076934
n_tup_del = 24566916
n_tup_hot_upd = 4073821079
n_live_tup = 79942
n_dead_tup = 71969
n_mod_since_analyze = 12020
last_vacuum = 2020-03-17 15:35:19.588859+00
last_autovacuum = 2020-03-17 21:31:08.248598+00
last_analyze = 2020-03-17 15:35:20.372875+00
last_autoanalyze = 2020-03-17 22:04:41.76743+00
vacuum_count = 9
autovacuum_count = 135693
analyze_count = 9
autoanalyze_count = 495877

As you can see in this table, there are only ~80K rows, but billions
of updates.  What we have observed is that the frozenxid reaches the
200M mark fairly quickly because of the amount of activity.  What is
interesting is that this happens with the 'postgres' and 'template1'
databases as well and there is absolutely no activity in those
databases.
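
(We watch this with the standard catalog query, e.g.

  SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;

and the 200M mark is the default autovacuum_freeze_max_age.)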

When the 'postgres' and/or 'template1' databases hit the
freeze_max_age, there are cases where it kicks off an aggressive
autovac of those tables which seems to prevent autovacs from running
elsewhere.  Oddly, this is not consistent, but that condition seems to
be required.  We have observed this across multiple PG12 servers (dev,
test, staging, production) all with similar workloads.

$ grep -i vacuum /var/log/postgresql/postgres.log | cut -b 1-9 | uniq -c
 17 Mar 17 06
 34 Mar 17 07
 31 Mar 17 08
 31 Mar 17 09
 30 Mar 17 10
 34 Mar 17 11
 33 Mar 17 12
 19 Mar 17 13
 40 Mar 17 15
 31 Mar 17 16
 36 Mar 17 17
 34 Mar 17 18
 35 Mar 17 19
 35 Mar 17 20
 33 Mar 17 21

As you can see above, we end up having around ~33 autovacs/hr, and at
about 13:30 today they stopped, until we ran a "vacuum freeze verbose
analyze;" against the 'postgres' database (around 15:30), which then
caused the autovacs to resume running against the "feedi" database.

I'm completely perplexed as to what is happening and why it suddenly
started when we moved from PG10 to PG12.  The configs and workload are
essentially the same between versions.  We realize we could simply
increase autovacuum_freeze_max_age, but that doesn't seem to
actually resolve anything -- it just pushes the problem out.  Has
anyone seen anything similar to this?

Thanks very much for the consideration.

Justin King
http://flightaware.com/




Re: Fwd: PG12 autovac issues

2020-03-17 Thread Adrian Klaver

On 3/17/20 3:22 PM, Justin King wrote:

Apologies, I accidentally sent this to the pgsql-admin list initially
but intended it go here:

We have a database that isn't overly large (~20G), but gets incredibly
frequent updates.  Here's an example table:

feedi=# select * from pg_stat_all_tables where schemaname =
'production' and relname = 'tita';
relid = 16786
schemaname = production
relname = tita
seq_scan = 23365
seq_tup_read = 403862091
idx_scan = 26612759248
idx_tup_fetch = 19415752701
n_tup_ins = 24608806
n_tup_upd = 4207076934
n_tup_del = 24566916
n_tup_hot_upd = 4073821079
n_live_tup = 79942
n_dead_tup = 71969
n_mod_since_analyze = 12020
last_vacuum = 2020-03-17 15:35:19.588859+00
last_autovacuum = 2020-03-17 21:31:08.248598+00
last_analyze = 2020-03-17 15:35:20.372875+00
last_autoanalyze = 2020-03-17 22:04:41.76743+00
vacuum_count = 9
autovacuum_count = 135693
analyze_count = 9
autoanalyze_count = 495877

As you can see in this table, there are only ~80K rows, but billions
of updates.  What we have observed is that the frozenxid reaches the
200M mark fairly quickly because of the amount of activity.  What is
interesting is that this happens with the 'postgres' and 'template1'
databases as well and there is absolutely no activity in those
databases.

When the 'postgres' and/or 'template1' databases hit the
freeze_max_age, there are cases where it kicks off an aggressive
autovac of those tables which seems to prevent autovacs from running
elsewhere.  Oddly, this is not consistent, but that condition seems to
be required.  We have observed this across multiple PG12 servers (dev,
test, staging, production) all with similar workloads.


Is there anything in postgres and template1 besides what was created at 
init?


What are your settings for autovacuum?:

https://www.postgresql.org/docs/12/runtime-config-autovacuum.html



$ grep -i vacuum /var/log/postgresql/postgres.log | cut -b 1-9 | uniq -c
  17 Mar 17 06
  34 Mar 17 07
  31 Mar 17 08
  31 Mar 17 09
  30 Mar 17 10
  34 Mar 17 11
  33 Mar 17 12
  19 Mar 17 13
  40 Mar 17 15
  31 Mar 17 16
  36 Mar 17 17
  34 Mar 17 18
  35 Mar 17 19
  35 Mar 17 20
  33 Mar 17 21

As you can see above, we end up having around ~33 autovac/hr, and
about 13:30 today, they stopped until we ran a "vacuum freeze verbose
analyze;" against the 'postgres' database (around 15:30) which then
caused the autovacs to resume running against the "feedi" database.

I'm completely perplexed as to what is happening and why it suddenly
started when we moved from PG10 > PG12.  The configs and workload are
essentially the same between versions.  We realize we could simply
increase the autovacuum_freeze_max_age, but that doesn't seem to
actually resolve anything -- it just pushes the problem out.  Has
anyone seen anything similar to this?

Thanks very much for the consideration.

Justin King
http://flightaware.com/





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Order by and timestamp SOLVED

2020-03-17 Thread Adrian Klaver

On 3/17/20 8:57 AM, Björn Lundin wrote:





I am still not sure that this can be marked solved. I am trying to 
figure out how running a different version of psql once can affect 
another version of psql. That would seem to imply psql changed 
something on the server and AFAIK sorting/ordering is done by the 
server not the client.




Ok if you want, I can
* keep the server as is
* give you ssh access to it


Well, some digging around at the above confirmed that the date sorting 
is working correctly now. The only thing I could find is what I take to 
be a discrepancy between the 9.4 and 9.6 instances of psql. Namely that 
the 9.4 instance does not have a link to libm:


linux-vdso.so.1 (0x7ffc381f5000)
libpq.so.5 => /usr/lib/x86_64-linux-gnu/libpq.so.5 (0x7fd35f958000)
libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x7fd35f6f7000)
libedit.so.2 => /usr/lib/x86_64-linux-gnu/libedit.so.2 (0x7fd35f4bf000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7fd35f12)
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x7fd35eeb4000)
libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 (0x7fd35ea1a000)
libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x7fd35e7cf000)
libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 (0x7fd35e57e000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fd35e361000)
libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x7fd35df65000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7fd35dd61000)
libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5 (0x7fd35db3e000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x7fd35d914000)
libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x7fd35d6fe000)
/lib64/ld-linux-x86-64.so.2 (0x7fd35fe07000)
libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x7fd35d424000)
libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x7fd35d1f1000)
libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x7fd35cfed000)
libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x7fd35cde1000)
libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x7fd35cbdd000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x7fd35c9c6000)
liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 (0x7fd35c7b7000)
libsasl2.so.2 => /usr/lib/x86_64-linux-gnu/libsasl2.so.2 (0x7fd35c59c000)
libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30 (0x7fd35c203000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7fd35bffb000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7fd35bde1000)
libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x7fd35bb7c000)
libidn.so.11 => /lib/x86_64-linux-gnu/libidn.so.11 (0x7fd35b948000)
libtasn1.so.6 => /usr/lib/x86_64-linux-gnu/libtasn1.so.6 (0x7fd35b735000)
libnettle.so.6 => /usr/lib/x86_64-linux-gnu/libnettle.so.6 (0x7fd35b4fe000)
libhogweed.so.4 => /usr/lib/x86_64-linux-gnu/libhogweed.so.4 (0x7fd35b2c9000)
libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x7fd35b046000)
libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x7fd35ae3d000)




Let me know, and I’ll mail you (privately) login details
I am going to move the data, and I have the whole set of daily pg_dumps 
I need to set it up elsewhere.


--
Björn Lundin
b.f.lun...@gmail.com 







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Fwd: PG12 autovac issues

2020-03-17 Thread Justin King
On Tue, Mar 17, 2020 at 5:39 PM Adrian Klaver  wrote:
>
> On 3/17/20 3:22 PM, Justin King wrote:
> > Apologies, I accidentally sent this to the pgsql-admin list initially
> > but intended it go here:
> >
> > We have a database that isn't overly large (~20G), but gets incredibly
> > frequent updates.  Here's an example table:
> >
> > feedi=# select * from pg_stat_all_tables where schemaname =
> > 'production' and relname = 'tita';
> > relid = 16786
> > schemaname = production
> > relname = tita
> > seq_scan = 23365
> > seq_tup_read = 403862091
> > idx_scan = 26612759248
> > idx_tup_fetch = 19415752701
> > n_tup_ins = 24608806
> > n_tup_upd = 4207076934
> > n_tup_del = 24566916
> > n_tup_hot_upd = 4073821079
> > n_live_tup = 79942
> > n_dead_tup = 71969
> > n_mod_since_analyze = 12020
> > last_vacuum = 2020-03-17 15:35:19.588859+00
> > last_autovacuum = 2020-03-17 21:31:08.248598+00
> > last_analyze = 2020-03-17 15:35:20.372875+00
> > last_autoanalyze = 2020-03-17 22:04:41.76743+00
> > vacuum_count = 9
> > autovacuum_count = 135693
> > analyze_count = 9
> > autoanalyze_count = 495877
> >
> > As you can see in this table, there are only ~80K rows, but billions
> > of updates.  What we have observed is that the frozenxid reaches the
> > 200M mark fairly quickly because of the amount of activity.  What is
> > interesting is that this happens with the 'postgres' and 'template1'
> > databases as well and there is absolutely no activity in those
> > databases.
> >
> > When the 'postgres' and/or 'template1' databases hit the
> > freeze_max_age, there are cases where it kicks off an aggressive
> > autovac of those tables which seems to prevent autovacs from running
> > elsewhere.  Oddly, this is not consistent, but that condition seems to
> > be required.  We have observed this across multiple PG12 servers (dev,
> > test, staging, production) all with similar workloads.
>
> Is there anything in postgres and template1 besides what was created at
> init?

There is nothing in there at all besides system tables created at init.

>
> What are your settings for autovacuum?:
>
> https://www.postgresql.org/docs/12/runtime-config-autovacuum.html

Here are the settings; these are the only ones not set to default, with
the exception of a few tables that have been overridden with different
values due to lots of updates and few rows:

autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 8
autovacuum_naptime = 15s
autovacuum_vacuum_threshold = 500
autovacuum_analyze_threshold = 2500
vacuum_cost_limit = 1000

We want fairly aggressive autovacs to keep table bloat limited -- the
application latency suffers if it has to wade through dead tuples and
staying near realtime is important in our environment.

** Also, it should be noted that the autovacuum_analyze_threshold is
probably an incorrect value; we likely intended that to be 250 and have
just now realized it after poking more at the configuration.

>
> >
> > $ grep -i vacuum /var/log/postgresql/postgres.log | cut -b 1-9 | uniq -c
> >   17 Mar 17 06
> >   34 Mar 17 07
> >   31 Mar 17 08
> >   31 Mar 17 09
> >   30 Mar 17 10
> >   34 Mar 17 11
> >   33 Mar 17 12
> >   19 Mar 17 13
> >   40 Mar 17 15
> >   31 Mar 17 16
> >   36 Mar 17 17
> >   34 Mar 17 18
> >   35 Mar 17 19
> >   35 Mar 17 20
> >   33 Mar 17 21
> >
> > As you can see above, we end up having around ~33 autovac/hr, and
> > about 13:30 today, they stopped until we ran a "vacuum freeze verbose
> > analyze;" against the 'postgres' database (around 15:30) which then
> > caused the autovacs to resume running against the "feedi" database.
> >
> > I'm completely perplexed as to what is happening and why it suddenly
> > started when we moved from PG10 > PG12.  The configs and workload are
> > essentially the same between versions.  We realize we could simply
> > increase the autovacuum_freeze_max_age, but that doesn't seem to
> > actually resolve anything -- it just pushes the problem out.  Has
> > anyone seen anything similar to this?
> >
> > Thanks very much for the consideration.
> >
> > Justin King
> > http://flightaware.com/
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: Fwd: PG12 autovac issues

2020-03-17 Thread Adrian Klaver

On 3/17/20 3:48 PM, Justin King wrote:

On Tue, Mar 17, 2020 at 5:39 PM Adrian Klaver  wrote:


On 3/17/20 3:22 PM, Justin King wrote:

Apologies, I accidentally sent this to the pgsql-admin list initially
but intended it go here:

We have a database that isn't overly large (~20G), but gets incredibly
frequent updates.  Here's an example table:

feedi=# select * from pg_stat_all_tables where schemaname =
'production' and relname = 'tita';
relid = 16786
schemaname = production
relname = tita
seq_scan = 23365
seq_tup_read = 403862091
idx_scan = 26612759248
idx_tup_fetch = 19415752701
n_tup_ins = 24608806
n_tup_upd = 4207076934
n_tup_del = 24566916
n_tup_hot_upd = 4073821079
n_live_tup = 79942
n_dead_tup = 71969
n_mod_since_analyze = 12020
last_vacuum = 2020-03-17 15:35:19.588859+00
last_autovacuum = 2020-03-17 21:31:08.248598+00
last_analyze = 2020-03-17 15:35:20.372875+00
last_autoanalyze = 2020-03-17 22:04:41.76743+00
vacuum_count = 9
autovacuum_count = 135693
analyze_count = 9
autoanalyze_count = 495877

As you can see in this table, there are only ~80K rows, but billions
of updates.  What we have observed is that the frozenxid reaches the
200M mark fairly quickly because of the amount of activity.  What is
interesting is that this happens with the 'postgres' and 'template1'
databases as well and there is absolutely no activity in those
databases.

When the 'postgres' and/or 'template1' databases hit the
freeze_max_age, there are cases where it kicks off an aggressive
autovac of those tables which seems to prevent autovacs from running
elsewhere.  Oddly, this is not consistent, but that condition seems to
be required.  We have observed this across multiple PG12 servers (dev,
test, staging, production) all with similar workloads.


Is there anything in postgres and template1 besides what was created at
init?


There is nothing in there at all besides system tables created at init.



What are your settings for autovacuum?:

https://www.postgresql.org/docs/12/runtime-config-autovacuum.html


Here are the settings, these are the only ones that are not set to
default with the exception of a few tables that have been overridden
with a different value due to lots of updates and few rows:


And those values are?

More below.



autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 8
autovacuum_naptime = 15s
autovacuum_vacuum_threshold = 500
autovacuum_analyze_threshold = 2500
vacuum_cost_limit = 1000


Are either of the below set > 0?:

vacuum_cost_delay

autovacuum_vacuum_cost_delay



We want fairly aggressive autovacs to keep table bloat limited -- the
application latency suffers if it has to wade through dead tuples and
staying near realtime is important in our environment.

** Also, it should be noted that the autovacuum_analyze_threshold is
probably an incorrect value, we likely intended that to be 250 and
just have now realized it after poking more at the configuration.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Automatic failover

2020-03-17 Thread Ian Barwick

On 2020/03/17 19:30, Sonam Sharma wrote:
> I have setup replication using repmgr. Wanted to know how much time
> the slave node will take to become new primary ?? If any document,
> can someone please share of auto failover.  With automatic failover,
> how much time the slave takes to become new primary .

It's impossible to give a generic answer to this.

Firstly, there will be the interval between the primary failing and whatever
failover system you are using deciding to make a failover decision. This will
be configurable, and will always be a trade-off between speed and certainty.
Secondly, it will depend on what work the standby needs to do before it completes
the promotion process. Typically the standby will promote very quickly, but e.g.
if it has a bunch of WAL to replay first, the process will take longer.
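
To illustrate the first part: with repmgr, the detection window is governed
by repmgrd settings along these lines (a sketch -- check the repmgr
documentation for your version):

# repmgr.conf
failover = 'automatic'
reconnect_attempts = 6    # retries against the failed primary...
reconnect_interval = 10   # ...this many seconds apart, before promoting
promote_command = '/usr/bin/repmgr standby promote -f /etc/repmgr.conf'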


Regards

Ian Barwick


--
Ian Barwick   https://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: Mixed Locales and Upgrading

2020-03-17 Thread Michael Paquier
On Tue, Mar 17, 2020 at 10:45:50AM -0400, Tom Lane wrote:
> Don Seiler  writes:
>> What are the ramifications of changing collation like that? Should we
>> consider rebuilding indexes ASAP after that?
> 
> Text indexes would definitely be at risk here.  I'm not really certain
> how bad the problem would be.  Do you have a feeling for how much of
> the data is 100% ASCII?  If you could be sure of that for any given
> column, you wouldn't have to reindex indexes on that column.

There is no way to know how many indexes would get broken without
having a look at them.  Anything ASCII-only should be no problem.
If you have any doubts, reindexing every index which includes text column
data is the best course of action in my opinion, because that's safe even
if it has a higher cost.
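
A starting point for finding those indexes could be something like this
sketch (note it will not catch expression indexes):

SELECT DISTINCT i.indexrelid::regclass AS index_name,
                i.indrelid::regclass  AS table_name
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid
                   AND a.attnum = ANY (i.indkey)
WHERE a.atttypid IN ('text'::regtype, 'varchar'::regtype, 'bpchar'::regtype);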
--
Michael




How can I set all constraints to be deferrable for a DB/schema

2020-03-17 Thread David Gauthier
Hi:

psql (9.6.7, server 11.3) (linux)

Is there a way to set all constraints on all tables of a DB and/or schema
to be deferrable?  Or do I have to do them one-by-one?
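
(For reference, the one-at-a-time form would be something like:

  ALTER TABLE orders ALTER CONSTRAINT orders_customer_id_fkey
    DEFERRABLE INITIALLY DEFERRED;

which works for foreign-key constraints; unique and primary-key constraints
cannot be altered this way and would need to be dropped and re-created as
DEFERRABLE.)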