PG9.1 migration to PG9.6, dump/restore issues

2018-09-12 Thread Scot Kreienkamp
Hi Everyone,

I am working on a migration from PG9.1 to PG9.6.  Hoping some people can chime 
in on my plans as I am running into some restore issues.

We are upgrading to a new version of PG and migrating to new hardware with RHEL 
7, so I am planning on doing a dump and restore to get moved to the new server. 
My database is about 300 gigs, not huge but big enough that doing a single-threaded dump with a multi-threaded restore is going to take too much time for the window of opportunity I've been given.  I know I can use a multi-threaded restore on PG9.6 using the custom or directory formats, but the PG9.1 pg_dump only supports single-threaded dumps.  To get around this I'm going to disable all database access to the PG9.1 databases, then use the PG9.6 tools to do a multi-threaded dump and then a multi-threaded restore.

These are the commands I was using:
pg_dump -vj 4 -F d -h $OLDSERVER $DATABASE -f $BACKUPPATH/$DATABASE --no-synchronized-snapshots
createdb $DATABASE
pg_restore -evj 4 -d $DATABASE $BACKUPPATH/$DATABASE --disable-triggers

Restore completes successfully, but I noticed that the schema permissions are missing, and possibly others as well (is this a known issue?).  So instead, I tried backing up and restoring the schema only as a single-threaded dump and restore, then dumping the data multi-threaded using the PG9.6 tools, then doing a multi-threaded data-only restore using the PG9.6 tools into the already-existing schema.

These are the commands I'm using now:
pg_dump -sh $OLDSERVER  $DATABASE -f $BACKUPPATH/$DATABASE.schema.sql
pg_dump -vj 4 -F d -h $OLDSERVER $DATABASE -f $BACKUPPATH/$DATABASE --no-synchronized-snapshots
createdb $DATABASE
psql -d $DATABASE -f $BACKUPPATH/$DATABASE.schema.sql
pg_restore -evj 4 -d $DATABASE  $BACKUPPATH/$DATABASE   -a --disable-triggers


That seemed to work OK so far, but the missing schema permissions from my first try have me spooked.  Are there any problems with attempting this type of backup/restore?  Would I be better off using the commands from my first attempt and reapplying permissions?  Or is a single-threaded dump my only option to get a good backup?  I have to be able to revert to the old server as this is production, so an in-place upgrade is not possible... the original server has to remain pristine.


Thanks!



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
www.la-z-boy.com<http://www.la-z-boy.com> | facebook.com/lazboy<http://facebook.com/lazboy> | twitter.com/lazboy<http://twitter.com/lazboy> | youtube.com/lazboy<http://youtube.com/lazboy>





RE: PG9.1 migration to PG9.6, dump/restore issues

2018-09-12 Thread Scot Kreienkamp



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Wednesday, September 12, 2018 10:40 AM
> To: Scot Kreienkamp 
> Cc: pgsql-general@lists.postgresql.org
> Subject: Re: PG9.1 migration to PG9.6, dump/restore issues
>
> Scot Kreienkamp  writes:
> > Restore completes successfully, but I noticed that the schema
> > permissions are missing, possibly others as well (Is this a known
> > issue?).
>
> If you're talking about custom modifications you made to the permissions
> of the "public" schema in particular, then yeah, that won't be tracked
> (IIRC, it will be with newer source server versions, but not 9.1).
> Otherwise, no, that's not expected.  Would you provide more detail?
>
Yes, it's permissions on the public schema.  They were completely empty.  I didn't check the other schemas, as the very first thing I noticed was that the permissions had changed on the public schema, but I believe they were empty as well.
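
Since the 9.1-era dump doesn't carry custom grants on the built-in public schema, one workaround is to re-apply them by hand after the restore.  A minimal sketch, assuming a hypothetical app_user role held the original grants:

psql -d $DATABASE <<'SQL'
-- re-apply custom grants on the public schema after the restore;
-- app_user is a placeholder for whatever roles held privileges on 9.1
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT USAGE, CREATE ON SCHEMA public TO app_user;
SQL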

> > These are the commands I'm using now:
> > pg_dump -sh $OLDSERVER $DATABASE -f $BACKUPPATH/$DATABASE.schema.sql
> > pg_dump -vj 4 -F d -h $OLDSERVER $DATABASE -f $BACKUPPATH/$DATABASE --no-synchronized-snapshots
> > createdb $DATABASE
> > psql -d $DATABASE -f $BACKUPPATH/$DATABASE.schema.sql
> > pg_restore -evj 4 -d $DATABASE $BACKUPPATH/$DATABASE -a --disable-triggers
>
> Also note that this recipe does not copy "global" objects (users and
> tablespaces), nor does it restore any database-level properties.
> You'd need to use pg_dumpall to transfer those things automatically.
> (Possibly "pg_dumpall -g" would be a good starting point here.)
>
>   regards, tom lane

I notice that pg_dumpall -g doesn't bring over the schema permissions either.  The only way I can get them to come over is pg_dumpall -s, which creates the databases and tables as well.  I could drop the databases and create empty ones to do the restore, I guess; it would only take a few extra seconds.
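
One way to combine the pieces discussed here, as a hedged sketch (paths are illustrative, and it assumes the -s output is replayed on the new cluster before the data is loaded):

pg_dumpall -s -h $OLDSERVER -f $BACKUPPATH/fullschema.sql   # globals plus per-database schemas and ACLs
psql -f $BACKUPPATH/fullschema.sql postgres                 # replays roles, databases, schemas, grants
# then load the data with the parallel, data-only restore from above:
pg_restore -evj 4 -d $DATABASE -a --disable-triggers $BACKUPPATH/$DATABASE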




This message is intended only for the individual or entity to which it is 
addressed.  It may contain privileged, confidential information which is exempt 
from disclosure under applicable laws.  If you are not the intended recipient, 
you are strictly prohibited from disseminating or distributing this information 
(other than to the intended recipient) or copying this information.  If you 
have received this communication in error, please notify us immediately by 
e-mail or by telephone at the above number. Thank you.


RE: PG9.1 migration to PG9.6, dump/restore issues

2018-09-12 Thread Scot Kreienkamp
> Did you see errors in the restore?
>
> In particular about not finding roles(users) for the permissions?
>
> I ask because I do not see in the above anything about dumping objects
> global to the cluster. That would include roles. I use:
>
> pg_dumpall -g -f globals.sql
>
> See:
>
> https://www.postgresql.org/docs/10/static/app-pg-dumpall.html
>

Nope, no errors in the restore that I could see.  As I was discussing with Tom Lane, the -g switch doesn't bring over schema permissions either.  I could use the -s switch and just re-create the databases as empty again after that's applied.  That brings over everything except data.


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com



RE: PG9.1 migration to PG9.6, dump/restore issues

2018-09-12 Thread Scot Kreienkamp
Thanks Ron, glad to hear it worked and someone was successful at it.  I’m on 
the right path then.



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Ron [mailto:ronljohnso...@gmail.com]
Sent: Wednesday, September 12, 2018 10:40 AM
To: pgsql-general@lists.postgresql.org
Subject: Re: PG9.1 migration to PG9.6, dump/restore issues

On 09/12/2018 08:55 AM, Scot Kreienkamp wrote:

> These are the commands I was using:
> pg_dump -vj 4 -F d -h $OLDSERVER $DATABASE -f $BACKUPPATH/$DATABASE --no-synchronized-snapshots
> createdb $DATABASE
> pg_restore -evj 4 -d $DATABASE $BACKUPPATH/$DATABASE --disable-triggers


This is almost exactly what I did when migrating from 8.4 to 9.6.  As Adrian 
Klaver mentioned, you need to dump the globals and then run that script on the 
new database.

No need to disable triggers, since that option is "relevant only when performing a data-only restore", and you aren't doing a data-only restore.  Besides, pg_restore adds all that metadata -- including PKs, FKs, indexes, etc. -- to the db *after* the data is loaded.

--
Angular momentum makes the world go 'round.




RE: Pgbouncer discard all

2018-10-16 Thread Scot Kreienkamp
Are you sure they’re actually waiting?  Don’t forget 10.5 will show the last query executed even if the connection is idle.  I believe DISCARD ALL would be the last command pgbouncer sends to the database when the client is done, as it resets the connection for the next client.  So what you’re describing sounds like expected behavior.

Try this to see if the queries are actually waiting:

select * from pg_stat_activity where wait_event_type is not null or wait_event 
is not null;




Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Nicola Contu [mailto:nicola.co...@gmail.com]
Sent: Tuesday, October 16, 2018 8:12 AM
To: pgsql-general@lists.postgresql.org
Cc: Alessandro Aste 
Subject: Re: Pgbouncer discard all

Hello,
Is this normal? Can anyone help?

Thanks a lot for your help in advance.

Nicola

On Wed, 10 Oct 2018 at 17:03, Nicola Contu <nicola.co...@gmail.com> wrote:
Hello,
we are running pgbouncer 1.9.1 connected to postgres 10.5

Sometimes we are seeing a lot of waiting connections with this query :

DISCARD ALL

This is our pgbouncer config :

[databases]
dev = host=10.10.10.1 port=5432 dbname=dev auth_user=pgbouncer pool_size=120

[pgbouncer]
listen_port = 6543
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/users.txt
auth_query = select uname,phash from user_lookup($1)
logfile = /var/log/pgbouncer.log
pidfile = /home/postgres/pgbouncer.pid
admin_users = admin
user = postgres
max_db_connections = 220
log_connections = 0
log_disconnections = 0


Do you think this can depend on the server_idle_timeout default config value?

Thanks a lot,
Nicola



RE: Pgbouncer discard all

2018-10-16 Thread Scot Kreienkamp
If they are visible in pg_stat_activity then yes.  Likewise, if they are only 
visible in pgbouncer, then no.

The last query being discard all means that PGBouncer has returned the 
connection to the pool to make it available to the next client that needs it.  
So what you’re seeing sounds to me like expected behavior.
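
As an aside, both the reset behavior and how long those pooled server connections linger are tunable in pgbouncer.ini; a sketch showing the stock defaults (the values here are illustrative, not a recommendation):

[pgbouncer]
server_reset_query = DISCARD ALL   ; run when a client releases a connection (session pooling)
server_idle_timeout = 600          ; close server connections idle longer than this many seconds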


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Nicola Contu [mailto:nicola.co...@gmail.com]
Sent: Tuesday, October 16, 2018 11:12 AM
To: martin.marq...@2ndquadrant.com
Cc: Scot Kreienkamp ; 
pgsql-general@lists.postgresql.org; Alessandro Aste 
Subject: Re: Pgbouncer discard all

-[ RECORD 1 ]---+-------------------------------
datid           | 2492534808
datname         | dev
pid             | 7355
usesysid        | 1833427130
usename         | pgbouncer
client_addr     | 10.151.2.145
client_port     | 60570
backend_start   | 2018-10-16 14:13:05.151015+00
query_start     | 2018-10-16 15:10:40.309993+00
state_change    | 2018-10-16 15:10:40.310038+00
wait_event_type | Client
wait_event      | ClientRead
state           | idle
query           | DISCARD ALL

They are idle, actually.
Will they count as client connections against the total set in postgresql.conf?

On Tue, 16 Oct 2018 at 16:22, Martín Marqués <martin.marq...@2ndquadrant.com> wrote:
On 16/10/18 at 09:59, Scot Kreienkamp wrote:
> Are you sure they’re actually waiting?  Don’t forget 10.5 will show the
> last query executed even if the connection is idle.  I believe discard
> all would be the last command the pgbouncer would send to the database
> when the client is done as it resets the connection for the next
> client.  So what you’re describing would seem to be expected behavior.

He might have been referring to client waiting. That is visible in the
pgbouncer pseudo-database.

OTOH, if the waiting is seen in pg_stat_activity, then pgbouncer has
nothing to do with it. The connection has already been assigned to the client
and the waiting is happening on the database server, not the pooler.

Regards,

--
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Replication question

2018-10-22 Thread Scot Kreienkamp
Hi everyone,

We just moved to PG9.6 from 9.1 (yeah, not my choice to wait this long).  In 9.1 I had to make the archive location (NFS in my case) available to all the mirrors running PG so that they could catch up whenever they fell behind.  I thought I read somewhere that in 9.6, as long as the WAL log is available on disk or in the archive, the replication server will provide it to the replication client, and my archive NFS mount doesn't have to be available to all replication clients.  It doesn't seem to be operating that way, though.  Did I completely remember that wrong, or did I misunderstand something?



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
www.la-z-boy.com<http://www.la-z-boy.com> | facebook.com/lazboy<http://facebook.com/lazboy> | twitter.com/lazboy<http://twitter.com/lazboy> | youtube.com/lazboy<http://youtube.com/lazboy>





RE: max_connections parameter: too_many_connections error

2019-09-17 Thread Scot Kreienkamp
1. How can we decide on an optimal value for max_connections for a given setup/server? I checked many posts saying that even 1000 is considered a very high value, but we are hitting the too_many_connections error due to the max_connections limit.



I have one set at 1000, but I usually top out around 500 right now.  As long as you have the CPU and memory to handle whatever you’ve set, I would think it’s OK.  Hopefully others with more experience with big hardware will chime in.



2. We use the Hikari pool on the client side, but even that has limitations. Implementing pgbouncer is another suggestion in the posts. In an HA setup, do we have to set it up on both primary and standby?



Client-side pooling is a different proposition than server-side: with each client maintaining its own pool versus the server maintaining a common pool, server-side pooling will lower your connection count more.  The value of pooling also depends on your app.  If it’s maintaining persistent connections for hours at a time then any kind of pooling is going to be of limited value, depending on how the pooler is operating, i.e. session pooling vs. transaction pooling.  If your transactions are simple you could use transaction pooling, which could greatly reduce the connection count (see the sketch below).
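
A hedged pgbouncer.ini fragment illustrating transaction pooling (the pool sizes are illustrative and would need tuning for the workload):

[pgbouncer]
pool_mode = transaction    ; a server connection is held only for the duration of a transaction
max_client_conn = 1000     ; client connections pgbouncer will accept
default_pool_size = 50     ; server connections per user/database pair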



For an HA setup you would need to set it up on both PG nodes.  If you’re using the standby for read-only queries then it would have to be running on both nodes at all times, and Pacemaker would start/stop it as part of the failover process.  If you’re only using the active node then Pacemaker would have to start the pooler on the active node and stop it on the passive node, and again control it on both nodes for failover.  If it’s that large of a setup you may want to make the pooler its own cluster with Pacemaker, to relieve the PG cluster of the additional load.

Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
www.la-z-boy.com<http://www.la-z-boy.com> | facebook.com/lazboy<http://facebook.com/lazboy> | twitter.com/lazboy<http://twitter.com/lazboy> | youtube.com/lazboy<http://youtube.com/lazboy>



RE: psql crash on 9.6.16

2020-03-16 Thread Scot Kreienkamp


I don't have an answer, just some questions:   
No problem, wasn't looking for an answer, just trying to help by reporting it.  


1) OS and version of same?   
RHEL7.7

2) Were you using the 9.6 version of psql?
Yes, 9.6.16 client and server, installed via RPM from the official repo 
packages.

3) What was the connection string that you supplied to psql?
No connection string, so it connected via socket: psql -d rms.



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | E-mail: scot.kreienk...@la-z-boy.com






RE: psql crash on 9.6.16

2020-03-16 Thread Scot Kreienkamp


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | E-mail: scot.kreienk...@la-z-boy.com

-Original Message-
From: Tom Lane  
Sent: Monday, March 16, 2020 11:40 AM
To: Scot Kreienkamp 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: psql crash on 9.6.16


Scot Kreienkamp  writes:
> I was using psql to connect to the local database for a simple update this 
> morning and it crashed.  Here's what was printed out after the crash.  I was 
> not able to re-create it.

Hm, the backtrace says the problem is somewhere inside libreadline,
which means it has nothing to do with any SQL-level concepts, but
rather with editing your input text.  Did you do anything unusual
while typing that line of input --- say, recall it from history,
or edit it in a way that you don't usually do, or even mistype
some control character you don't usually type?

Sadly, the visible evidence here doesn't tell us much of anything
about what provoked the crash :-(

FWIW, there's a pretty good chance that this isn't psql's fault
per se, but a libreadline bug.  You might check to see if you can
get a newer readline version installed.

    regards, tom lane

[Scot Kreienkamp] 
Nothing unusual that I can think of.  I typed it in by hand, the update 
succeeded, and I was able to recall it and run it again from history with no 
crash.  





RE: Replication question

2018-10-22 Thread Scot Kreienkamp
Dang, I thought that sounded too good to be true.  Oh well.

Thanks for setting me straight.


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Don Seiler [mailto:d...@seiler.us]
Sent: Monday, October 22, 2018 9:58 AM
To: Scot Kreienkamp 
Cc: Postgres General 
Subject: Re: Replication question

I thought I read somewhere that in 9.6, as long as the WAL log is available on 
disk or in the archive the replication server will provide that to the 
replication client, and my archive NFS mount didn’t have to be available to all 
replication clients.

Streaming replication will only read from the WAL files in the $PGDATA/pg_xlog 
directory. It will not read from archives. So, yes, you would need your NFS 
mount on the replica (or otherwise copy the archive files to the replica).

Don.

--
Don Seiler
www.seiler.us<http://www.seiler.us>



RE: Replication question

2018-10-22 Thread Scot Kreienkamp
I remember thinking it was pulling from the archive with the restore command, if necessary, to augment what it had on disk.  If that was the case I wanted to configure it.  I don’t care for replication slots due to the possible disk space issue, as we don’t run shifts around the clock.  So I’ll have to mount the archive via NFS like I had before; not a big deal.

As an alternative to NFS I was thinking about making the archives available via 
HTTPD and using wget or curl in my script instead of a copy from NFS.  That 
seems like it would work better from the remote sites.
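
A sketch of what that could look like in recovery.conf on each replica, assuming a hypothetical archive host serving the WAL archive directory over HTTP (curl's -f flag makes the command return non-zero when a segment isn't there, which is what the restore_command contract expects):

restore_command = 'curl -sf -o %p http://archivehost/walarchive/%f'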


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Jeff Janes [mailto:jeff.ja...@gmail.com]
Sent: Monday, October 22, 2018 11:43 AM
To: Scot Kreienkamp 
Cc: Postgres General 
Subject: Re: Replication question


On Mon, Oct 22, 2018, 9:54 AM Scot Kreienkamp <scot.kreienk...@la-z-boy.com> wrote:
Hi everyone,

> I thought I read somewhere that in 9.6, as long as the WAL log is available on disk or in the archive, the replication server will provide it to the replication client [...] Did I completely remember that wrong, or did I misunderstand something?

The master won't read from the archives for you in order to send WAL to a replica.  But using replication slots, you can keep the needed log files right in pg_xlog/pg_wal until all replicas get what they need (assuming the disk is large enough).  Then you don't need an archive at all for replication purposes; you still might for PITR purposes.

Perhaps this is what you heard about.

Cheers,

Jeff
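
For completeness, a minimal sketch of the slot approach Jeff describes, on 9.6 (the slot name is illustrative): create a physical slot on the master and point the standby at it, and the master then retains WAL in pg_xlog until the standby has consumed it.

psql -h $MASTER -c "SELECT pg_create_physical_replication_slot('standby1');"
# and in the standby's recovery.conf:
#   primary_slot_name = 'standby1'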





RE: How to change standby node to sync from the new master without rebooting the PostgreSQL service?

2018-10-30 Thread Scot Kreienkamp
Point it at a VIP that travels with the master.


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Madan Kumar [mailto:madankumar1...@gmail.com]
Sent: Tuesday, October 30, 2018 7:20 AM
To: pgsql-general@lists.postgresql.org
Subject: How to change standby node to sync from the new master without 
rebooting the PostgreSQL service?

Hi,

Whenever there is a change of master, the PostgreSQL service on standby nodes must be restarted (after changing the master IP in recovery.conf) to ensure it is syncing with the new master.
Is there a way to point to a new master without a restart of PostgreSQL on the standby?

Warm Regards,
Madan Kumar K<https://about.me/madankumark>

"There is no Elevator to Success. You have to take the Stairs"



RE: How to change standby node to sync from the new master without rebooting the PostgreSQL service?

2018-10-30 Thread Scot Kreienkamp
Why is it not feasible?  How do your DB clients know to switch to the new 
master?

I’m using pcs clustering in my environment to manage two production nodes, automatic failover, and two VIPs (one for the master, one for the slave).  All my clients point at either the master VIP or the slave VIP.  When we have a role change where the master is moved, we do nothing to any clients or replication slaves; they automatically reconnect in all cases after the pcs cluster activates the two VIPs.  Also, in case the primary slave goes down, pcs will move the primary slave VIP to the master so that replication continues.  Makes maintenance easy; I can reboot the primary slave at any time and everything just continues working.

There is easy-to-use open source software that will do nothing but VIPs.  I used to use keepalived for that purpose.  You define a script that lets it determine which node to activate the VIP on.  In the script, have it check which node is the master, and it will activate the VIP on the master.  When you transition the master to another server, the VIP will travel with it; a sketch follows below.
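
A hedged keepalived sketch of that idea (the interface, VIP, and check-script path are all illustrative):

# hypothetical /etc/keepalived/keepalived.conf fragment
vrrp_script chk_pg_master {
    # is_pg_master.sh is a placeholder; it should exit 0 only on the node where
    # psql -qtAc 'select pg_is_in_recovery()' prints f (i.e. the master)
    script "/usr/local/bin/is_pg_master.sh"
    interval 5
    fall 2
    rise 2
}

vrrp_instance PG_MASTER_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_pg_master
    }
}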



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Madan Kumar [mailto:madankumar1...@gmail.com]
Sent: Tuesday, October 30, 2018 10:02 AM
To: Scot Kreienkamp 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: How to change standby node to sync from the new master without 
rebooting the PostgreSQL service?

Thanks Scot.
But moving the VIP is not a feasible option for me.
At present PostgreSQL doesn't support reloading recovery.conf parameters via SIGHUP. To avoid a recovery.conf change for the master IP, I can manage internal DNS to always point to the current master. However, there are some cases where the old master will come up as a standby before the new master is elected. In such a case it will lead to cascading replication.

So to overcome such cases a restart is required. It can be achieved by restarting the wal receiver process too, but there is no straightforward way of restarting the wal receiver process. The only way I figured out is to kill the wal receiver process; the postmaster will take care of restarting it. But here my worry is: will there be any side effect if I kill the wal receiver process (even using a TERM signal)?
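
For reference, a sketch of that kill-and-let-the-postmaster-respawn step (the process-title match is an assumption about how the walreceiver appears in ps output):

# send SIGTERM to the walreceiver; the postmaster respawns it, and the new
# walreceiver re-resolves the primary's hostname when it reconnects
pkill -TERM -f 'wal receiver'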

Warm Regards,
Madan Kumar K<https://about.me/madankumark>

"There is no Elevator to Success. You have to take the Stairs"


On Tue, Oct 30, 2018 at 6:27 PM Scot Kreienkamp <scot.kreienk...@la-z-boy.com> wrote:
Point it at a VIP that travels with the master.



Cascading replication with slots

2018-10-30 Thread Scot Kreienkamp
Hi everyone,

I've never used replication slots before, so I just want to clarify how they work on a two-node streaming replication cluster.


1. From the documentation, both active and standby nodes seem to be aware of the status of the replication slot... is that true?

2. Any limitations on using it with cascading replication?  For instance, can I set up a replication slot on a standby, then replicate using that slot from another standby?

3. Or can I only replicate from the master when using replication slots?


Thanks!

Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
www.la-z-boy.com<http://www.la-z-boy.com> | facebook.com/lazboy<http://facebook.com/lazboy> | twitter.com/lazboy<http://twitter.com/lazboy> | youtube.com/lazboy<http://youtube.com/lazboy>





help with aggregation query across a second text array column

2018-11-12 Thread Scot Kreienkamp

Hi everyone,


I had a dataset like so:

Servername | Primary class
--------------------------
Server1    | retail
Server2    | dmz
Server3    | NA


And I used this query to summarize it for automated tools:
select environment || ':' || string_agg(name, ',') from servers group by environment order by environment;


Now my dataset has changed to:

Servername text | Primary class text | Secondary class text[] (1-D array)
-------------------------------------------------------------------------
Server1         | retail             | ['dmz']

There can be multiple classes for secondary, but they will be the same classes as the primary classes.  Input is controlled via web pages, so that should be guaranteed.  For instance, there are servers that have a primary class of SQL but also belong to an application class.  Now I need to change my summary query to include all the servers for each class, taking into account the secondary class column, and I'm completely baffled on how to do so with the array.  I know the ANY trick and use it to match against the array when querying for specific primary and secondary classes, but I can't figure out how to generate the listing the same way as the old summary query I was using.


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
www.la-z-boy.com<http://www.la-z-boy.com> | facebook.com/lazboy<http://facebook.com/lazboy> | twitter.com/lazboy<http://twitter.com/lazboy> | youtube.com/lazboy<http://youtube.com/lazboy>





RE: help with aggregation query across a second text array column

2018-11-12 Thread Scot Kreienkamp
Thank you very much Rob, that concept worked out nicely.  I would never have 
thought of unioning the table to itself with unnest.

Here's my final query:

select environment || ':' || string_agg(name, ',')
from (select name, environment from servers
      union
      select name, unnest(auditenvironment) as environment from servers
      order by name) t
group by environment
order by environment;

Cheers!


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
> -Original Message-
> From: Rob Nikander [mailto:rob.nikan...@gmail.com]
> Sent: Monday, November 12, 2018 10:30 AM
> To: Postgres General 
> Cc: Scot Kreienkamp 
> Subject: Re: help with aggregation query across a second text array column
>
>
>
> > On Nov 12, 2018, at 9:40 AM, Scot Kreienkamp <scot.kreienk...@la-z-boy.com> wrote:
> > …
>
> I’m not too confident in my answer here (there could be a better way), but this
> might help. You could use the `unnest` function to transform the array into
> multiple rows. For example, given a table like
>
> create table t1 (env text, cls text, cls2 text[]);
>
> I can query it like:
>
> select env, string_agg(cls, ',')
>   from (select env, cls from t1
>         union
>         select env, unnest(cls2) from t1) t
> group by env;
>
> Rob



RE: Query help

2019-01-01 Thread Scot Kreienkamp
Any columns that aren’t involved in a summary operation (think math-type operations or some other kind of aggregate) have to be in the GROUP BY clause.

From what you show below, I would try something like this (untested); note the aggregate filter belongs in HAVING, not WHERE:

select accountid, name, sum(amount)
from transaction
group by accountid, name
having sum(amount) >= 50
order by accountid, name;

You can’t show the transaction ID unless you have duplicate transaction IDs that you wanted to group by.  If you did try to show it you’d get the entire table.  Or you could use a more advanced query to gather the multiple transaction IDs into a single record for the query results, which would let the sum and GROUP BY work (see the sketch below).
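
For the output Chuck actually asked for (every transaction row for accounts whose total is at least 50), a window function is one way to write that more advanced query; a hedged sketch against the simplified tables above (the database name is a placeholder):

psql -d mydb <<'SQL'
-- keep individual transaction rows, filtering on the per-account total
select id, amount, accountid, name
from (select *,
             sum(amount) over (partition by accountid) as account_total
      from transaction) t
where account_total >= 50
order by accountid, id;
SQL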


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Chuck Martin [mailto:clmar...@theombudsman.com]
Sent: Tuesday, January 01, 2019 2:06 PM
To: pgsql-general 
Subject: Query help

Sorry if this is too basic a question for this list, but I don't fully get how 
to use aggregates (sum()) and group-by together. I'm trying to get a list of 
transactions where the total for a given account exceeds a given number. I'm 
not sure an example is needed, but if so, consider this simplified data:

accountid  name
1          bill
2          james
3          sarah
4          carl

transaction
id  amount  accountid  name
1   50      1          bill
2   25      2          james
3   35      4          carl
4   75      1          bill
5   25      1          bill
6   50      3          sarah

results wanted - all transactions where the account total >= 50:

id  amount  accountid  name
1   50      1          bill
4   75      1          bill
5   25      1          bill
6   50      3          sarah

I've tried to understand how to use GROUP BY and HAVING, but the penny won't drop. I keep getting errors saying that all columns in the SELECT have to also be in the GROUP BY, but nothing I've done seems to produce the correct results. I think that's because the GROUP BY contains multiple columns, so each row is treated as its own group. It also is difficult to parse out since, in the real world, many more tables and columns are involved.

Chuck Martin
Avondale Software



RE: postgres operational

2019-01-09 Thread Scot Kreienkamp
The best way I came up with for older versions is:

if timeout -s 9 10 psql -d DBNAME -c "select 1" >/dev/null ; then ...

And on newer versions, use the pg_isready command.
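
A sketch of the pg_isready variant (available from 9.3 on); it exits 0 only when the server is actually accepting connections, so it catches the window where the socket exists but the database isn't ready yet (the socket directory and port here are illustrative):

if pg_isready -q -h /tmp -p 5432 ; then
    echo "postgres is accepting connections"
fi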


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
> -Original Message-
> From: Steve Clark [mailto:steve.cl...@netwolves.com]
> Sent: Wednesday, January 9, 2019 12:59 PM
> To: pgsql 
> Subject: postgres operational
>
> Hi List,
>
> Is there a sure-fire way to tell if the postgres server is up and operational? I
> was testing to see if the socket at /tmp/.s.PGSQL.5432 existed - but I ran into
> a recent problem on CentOS 7.5, postgresql 9.2.24, where the socket was there
> but my script couldn't read from my database yet.
>
> Thanks,
> Steve
> --
>




RE: pg_dump on a standby for a very active master

2019-02-12 Thread Scot Kreienkamp
How about pausing replication while you’re running the backup?  I have a mirror dedicated to backups; it pauses replication by cron job every night before the backup, then resumes midday, after I’ve had enough time to find out whether the backup was successful.
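
A sketch of those cron-driven steps on the standby (on 10.x the functions are pg_wal_replay_pause/resume; on 9.6 and earlier they were pg_xlog_replay_pause/resume):

# nightly, before the dump:
psql -c "SELECT pg_wal_replay_pause();"
# midday, once the backup has been verified:
psql -c "SELECT pg_wal_replay_resume();"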


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Arjun Ranade [mailto:ran...@nodalexchange.com]
Sent: Tuesday, February 12, 2019 11:33 AM
To: pgsql-general@lists.postgresql.org
Subject: pg_dump on a standby for a very active master


I have a Production machine which is having objects dropped/created/truncated 
at all hours of the day (Read: No zero activity window).  I have multiple 
standbys (repmgr streaming replication) for this machine including a cascading 
standby.  Each night, I am attempting to take a logical backup on the standby 
databases via pg_dump of key schemas.

Recently, due to the activity on the primary, pg_dump is failing on the standby 
usually with "ERROR:  could not obtain lock on relation."

I've had the following settings set in postgresql.conf which gave me successful 
backups for a while:

hot_standby = on                    # "off" disallows queries during recovery
max_standby_archive_delay = -1      # max delay before canceling queries
max_standby_streaming_delay = -1    # max delay before canceling queries
hot_standby_feedback = on           # send info from standby to prevent query conflicts
wal_receiver_timeout = 300s         # time that receiver waits for communication from master
I have it set up this way because I don't mind any replication lag on the standbys during the logical backup.  However, recently the logical backups have been failing due to tables being dropped or truncated on the master.

Also, I use pg_dump with the parallel option in directory format.  However, 
even single threaded pg_dump fails when a table is truncated on the primary.

Is there any way to guarantee consistent logical backups on a standby server 
with a master that has constant DDL/activity?

I am on Postgres 10.3; RHEL 7; 128gb RAM

Thanks,
Arjun




RE: automated refresh of dev from prod

2019-02-27 Thread Scot Kreienkamp
My method is complex and not so good for newbies, but it is incredibly fast and should scale to almost any size database.  Mine are not nearly as large, though.

I use two methods... the normal backup/restore for longer-lived development environments, and for shorter-lived environments I use Postgres native mirroring from a secondary prod server to all my dev environments, then use LVM snapshots to take a snapshot of the Postgres mount.  Mount the snapshot and start up a second Postgres instance in it, and you have a mirror of production ready for use.  That only lasts for a finite amount of time (until you fill the space dedicated to the snapshot) before it becomes unusable; that's the downside... it can't be long-lived (hours, 1-2 days maybe).  The upside is that the refresh from production in my environment for a 400G database is 3 seconds.  It is a trade-off and not fit for every use; that's why we also use the traditional backup/restore in some cases.  A sketch of the snapshot step follows below.
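
A hedged sketch of the snapshot/second-instance step (the volume group, snapshot size, paths, and port are all illustrative):

# snapshot the LV holding the replica's data directory
lvcreate --snapshot --size 50G --name pgsnap /dev/vg_data/pg_lv
mount /dev/vg_data/pgsnap /pgsnap
# the snapshot holds a crash-consistent copy of the replica; start a second
# instance on it, on its own port
pg_ctl -D /pgsnap/pgdata -o "-p 5544" start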



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Julie Nishimura [mailto:juliez...@hotmail.com]
Sent: Wednesday, February 27, 2019 4:16 PM
To: pgsql-gene...@postgresql.org
Subject: automated refresh of dev from prod


Hello everybody, I am new to the postgresql environment, but trying to get up to speed.
Can you please share your experience on how you automate the refresh of a dev environment on a regular basis (desirably weekly), taking into consideration that some prod dbs can be very large (like 20+ TB)?

Any suggestions?

Thank you!



RE: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Scot Kreienkamp
Replication and several other admin type operations must connect directly to 
PG.  They are not supported through PGBouncer.

From: Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 9:21 AM
To: pgsql-general@lists.postgresql.org
Subject: Getting error while running the pg_basebackup through PGBOUNCER


ATTENTION:   This email was sent to La-Z-Boy from an external source. Be 
vigilant when opening attachments or clicking links.
Hi All,

We are using PGBOUNCER (a connection pooling mechanism).  PGBOUNCER uses port 5433.

The Postgres database port number is 6433.  Via port 5433, PGBOUNCER connects to the Postgres database on port 6433.

PGBOUNCER is establishing connections properly, but when I try to run pg_basebackup through port 5433 (the PGBOUNCER port) we receive the error below.  Please guide me.


 /opt/postgres/9.2/bin/pg_basebackup -p 5433 -U postgres -P -v -x --format=tar 
--gzip --compress=1 --pgdata=- -D /opt/rao

pg_basebackup: could not connect to server: ERROR:  Unsupported startup 
parameter: replication


--
Regards,
Raghavendra Rao J S V


RE: Getting error while running the pg_basebackup through PGBOUNCER

2019-04-08 Thread Scot Kreienkamp
Basically, anything that is not written as a SQL query should be connected directly to PG.  PGBouncer is really only meant for SQL-query-type connections.
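
In other words, point pg_basebackup at the real server port rather than the pooler; based on the ports given earlier in the thread, something like:

# connect directly to postgres (port 6433 here), not pgbouncer (port 5433)
/opt/postgres/9.2/bin/pg_basebackup -p 6433 -U postgres -P -v -x --format=tar --gzip --compress=1 -D /opt/rao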

From: Raghavendra Rao J S V [mailto:raghavendra...@gmail.com]
Sent: Monday, April 8, 2019 10:19 AM
To: Scot Kreienkamp 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Getting error while running the pg_basebackup through PGBOUNCER


Thank you very much for your prompt response.

Could you explain the other admin-type operations that are not supported by pgbouncer?

Regards,
Raghavendra Rao.

On Mon, 8 Apr 2019 at 19:16, Scot Kreienkamp <scot.kreienk...@la-z-boy.com> wrote:
Replication and several other admin type operations must connect directly to 
PG.  They are not supported through PGBouncer.



--
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


RE: Streaming Replication

2019-04-22 Thread Scot Kreienkamp
Double-check that all the info on the primary_conninfo line is correct:

primary_conninfo = 'host=primary host port=5432 user=replication password=replication'

And check your logs.  They probably say something like "could not connect to host primary".
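
Note that host=primary host in the posted recovery.conf looks like an unreplaced placeholder (the embedded space alone makes it invalid).  A quick hedged connectivity test from the standby, substituting the primary address given in this thread:

psql "host=10.29.15.244 port=5432 user=replication password=replication dbname=postgres" -c "select 1;"
# if this fails, fix pg_hba.conf / networking first; the streaming standby
# itself connects under the special "replication" database keyword, which
# the posted pg_hba.conf rules already allow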

From: Daulat Ram [mailto:daulat@exponential.com]
Sent: Monday, April 22, 2019 5:54 AM
To: pgsql-general@lists.postgresql.org
Subject: Streaming Replication


Hello Team,

I am setting up streaming replication using two different hosts.  There is no output from select * from pg_stat_replication; and I have set the parameters on both sides.

The hosts are:

(10.29.15.244)
(10.29.15.25)


postgres=# select * from pg_stat_replication;
pid | usesysid | usename | application_name | client_addr | client_hostname | 
client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | 
flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | 
sync_state
-+--+-+--+-+-+-+---+--+---+--+---+---++---+---++---+
(0 rows)



bash-4.4$ ps aux | grep 'postgres.*rec'
  121 postgres  0:00 grep postgres.*rec
bash-4.4$


Parameters on primary are:
-
wal_level = hot_standby
max_wal_senders = 50
wal_keep_segments = 16
max_replication_slots = 16
vacuum_defer_cleanup_age = 4


Parameters on Standby are:
--
max_standby_archive_delay = 30s
max_standby_streaming_delay = 30s
archive_mode = on
archive_timeout = 1800
hot_standby = on

Recovery.conf @ standby:

standby_mode = on
primary_conninfo = 'host=primary host port=5432 user=replication 
password=replication'
trigger_file = '/tmp/touch_me_to_promote_to_me_master'


Hba.conf @primary

hostall all0.0.0.0/0  md5
hostreplication replication samenet md5
hostreplication postgres,kbcn,replication  10.29.0.0/16   md5
hostall kbcn,nagios,postgressamenet md5
hostall postgres0.0.0.0/0  md5
hostall kbcn,nagios,postgres10.29.0.0/16  md5
hostnossl   replication replication,postgres172.17.0.0/16   
md5
hostnossl   replication replication,postgres10.29.0.0/16   
md5
hostnossl   replication replication,postgres10.29.15.25/32
md5

hba.conf @standby


hostall all0.0.0.0/0  md5
hostreplication replication samenet md5
hostreplication postgres,kbcn,replication  10.29.0.0/16   md5
hostall kbcn,nagios,postgressamenet md5
hostall postgres0.0.0.0/0  md5
hostall kbcn,nagios,postgres10.29.0.0/16  md5
hostnossl   replication replication,postgres172.17.0.0/16   
md5
hostnossl   replication replication,postgres10.29.0.0/16   
md5


Please suggest what I have missed.

Regards,
Daulat



Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: 734-384-6403 | Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com


RE: Disk Groups/Storage Management for a Large Database in PostgreSQL

2024-01-23 Thread Scot Kreienkamp
On Mon, 22 Jan 2024 at 18:44, Amit Sharma <amitpg...@gmail.com> wrote:
Hi,

We are building new VMs for PostgreSQL v15 on RHEL 8.x for a large database of 15TB-20TB.

I would like to know from the experts that is it a good idea to create LVMs to 
manage storage for the database?

Or are there any other better options/tools for disk groups in PostgreSQL, 
similar to ASM in Oracle?

Thanks
Amit

Simple question that requires a somewhat more complex answer. There are 
actually 3 metrics to consider:

1) Capacity
Your database doesn't fit on a single disk, so you need to distribute your data 
across several disks. LVM would indeed be an option (as well as ZFS or RAID 
disk arrays)

2) Safety
If you lose 1 disk, your data is at risk, as you're likely to lose all tables partially stored on that disk. LVM is still an option as long as it is configured on a RAID array. ZFS can do that natively.

3) Performance
Oracle ASM ensures performance by automatically controlling the distribution of the tables. I would need to see on a real case how it is actually done. For sure, LVM and ZFS won't have this type of granularity.

On the other hand, you can distribute your data in table partitions to help 
this distribution. It is not automatic but will surely help you to distribute 
your workload.


As he is building VMs, I’m assuming the hardware level has all the redundancy for RAID/ZFS/etc.  If that is the case then you don’t want to run RAID/ZFS/etc. on top of that; let the hardware do its thing.  If my assumption is wrong then ignore everything I’m saying.

One thing I found that helps with the speed of reads/writes... you can spread your read/write load across multiple SCSI controllers/disks using LVM.  For example, I’m assuming VMware, which allows 4 SCSI controllers.  Set the OS disk on SCSI controller 0, then spread your database disks in sets of 3 across SCSI controllers 1-3, i.e. 3 disks of 5TB each, one on each SCSI controller.  Then when you create your LVM partition, specify the option to stripe it with 3 stripes.  That gives you a setup where you are multiplexing reads/writes across all 3 SCSI controllers and disks instead of bottlenecking them all through 1 SCSI controller and disk at a time; a sketch follows below.
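
A hedged sketch of that striped-LVM setup (device names, sizes, and the filesystem are illustrative):

# three 5TB disks, one per virtual SCSI controller
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_pgdata /dev/sdb /dev/sdc /dev/sdd
# stripe across all three so every read/write fans out over all controllers
lvcreate --stripes 3 --stripesize 64k -l 100%FREE -n lv_pgdata vg_pgdata
mkfs.xfs /dev/vg_pgdata/lv_pgdata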

Scot Kreienkamp | Applications Infrastructure Architect | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: (734) 384-6403 | Mobile: 1-734-915-1444 | Email: scot.kreienk...@la-z-boy.com
www.la-z-boy.com<http://www.la-z-boy.com> | facebook.com/lazboy<http://facebook.com/lazboy> | twitter.com/lazboy<http://twitter.com/lazboy> | youtube.com/lazboy<http://youtube.com/lazboy>



RE: Disk Groups/Storage Management for a Large Database in PostgreSQL

2024-01-23 Thread Scot Kreienkamp
Thanks Olivier and Scot for your inputs!

Another data point I would like to share is that the VMs will be built in Azure Cloud with Azure Managed Storage and the locally redundant storage (LRS) option, with a remote DR as well.

Would LVM or ZFS still be a good option to allow easy storage/disk management, like adding, resizing or removing disks while PostgreSQL services are up?
Is equal data distribution a challenge on LVM/ZFS disks?


Thanks
Amit



I would not call data distribution a challenge, but it is something to be aware of.  If LVM has 3 disks in a pool it will, by default, use all of disk 1, then all of disk 2, then all of disk 3.  The reason for that default is that it lets you add new disks one at a time; with striping you must add new disks equal to the number of stripes.  Either way I would still advise the use of LVM; see the sketch below.
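
A hedged sketch of growing each layout later (device names and the mount point are illustrative):

# linear LV: grow one disk at a time
pvcreate /dev/sde
vgextend vg_pgdata /dev/sde
lvextend -l +100%FREE /dev/vg_pgdata/lv_pgdata
xfs_growfs /var/lib/pgsql          # grow the filesystem to match

# striped LV with 3 stripes: add disks in multiples of three
pvcreate /dev/sde /dev/sdf /dev/sdg
vgextend vg_pgdata /dev/sde /dev/sdf /dev/sdg
lvextend -i 3 -l +100%FREE /dev/vg_pgdata/lv_pgdata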

Scot Kreienkamp | Applications Infrastructure Architect | La-Z-Boy Corporate
One La-Z-Boy Drive | Monroe, Michigan 48162 | Office: (734) 384-6403 | Mobile: 1-734-915-1444 | Email: scot.kreienk...@la-z-boy.com


