Re: More than one Cluster on single server (single instance)

2022-07-21 Thread Daulat
Thanks for your inputs.

We are considering this approach to consolidate some small applications
for individual clients and to perform administration (data/log/pgbackrest
backups, etc.) separately, which is not possible within a single cluster.

We tried to create and start a second cluster on port 5433 with a
separate data directory, but we are getting errors.

- I replaced the default port with 5433 in the postgresql.conf file.

- We are not using Ubuntu, so I think we can't use the pg_createcluster
wrapper.

./initdb -D /opt/PostgreSQL-10/whr_data1_5433 -U postgres -W

./pg_ctl -D /opt/PostgreSQL-10/whr_data1_5433 -l
/opt/PostgreSQL-10/whr_dlog/start.log start



Error while creating Cluster:

[postgres@ip-172-31-32-99 bin]$ cat /opt/PostgreSQL-10/whr_dlog/start.log

2022-07-18 13:11:14.404 UTC [13418] LOG:  could not bind IPv4 address
"127.0.0.1": Address already in use

2022-07-18 13:11:14.404 UTC [13418] HINT:  Is another postmaster already
running on port 5432? If not, wait a few seconds and retry.

2022-07-18 13:11:14.404 UTC [13418] WARNING:  could not create listen
socket for "localhost"

2022-07-18 13:11:14.404 UTC [13418] FATAL:  could not create any TCP/IP
sockets

2022-07-18 13:11:14.404 UTC [13418] LOG:  database system is shut down

[postgres@ip-172-31-32-99 bin]$
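(For reference — the error above means the new cluster is still trying to bind port 5432, so the edited port setting is not taking effect. A minimal sketch of running a second cluster on its own port, reusing the paths from the post above; the `-o` option passes the port at start time, overriding whatever postgresql.conf in that data directory says:)

```shell
# Initialize a second, independent data directory
./initdb -D /opt/PostgreSQL-10/whr_data1_5433 -U postgres -W

# Start it on its own port; -o "-p 5433" wins even if this data
# directory's postgresql.conf was not (or was wrongly) edited
./pg_ctl -D /opt/PostgreSQL-10/whr_data1_5433 \
         -o "-p 5433" \
         -l /opt/PostgreSQL-10/whr_dlog/start.log start

# Confirm the new cluster answers on 5433 and uses the right directory
psql -p 5433 -U postgres -c "SHOW data_directory;"
```

If the error persists, double-check that the `port = 5433` line was added to the postgresql.conf inside `/opt/PostgreSQL-10/whr_data1_5433`, not the one belonging to the existing 5432 cluster.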

On Mon, Jul 18, 2022 at 7:12 PM Ron  wrote:

> On 7/18/22 04:46, Guillaume Lelarge wrote:
>
> Hi,
>
> [snip]
>
> It would make things easier for you to use a single cluster, and as many
> databases as you want in this cluster.
>
>
> It would make *PITR* restoration *of a single database* (or small set of
> databases) impossible (since such backups -- and therefore restores -- are
> always binary "whole cluster").
>
> --
> Angular momentum makes the world go 'round.
>


Re: plan for function returning table combined with condition

2022-07-21 Thread Thierry Henrio
On Wed, Jul 20, 2022 at 8:39 PM Tom Lane  wrote:

> You want this SQL function to be inlined, but it isn't being.
> I think the reason is that (by default) it's VOLATILE, and
> inline_set_returning_function doesn't like that:
>
>  * Forget it if the function is not SQL-language or has other
> showstopper
>  * properties.  In particular it mustn't be declared STRICT, since we
>  * couldn't enforce that.  It also mustn't be VOLATILE, because that is
>  * supposed to cause it to be executed with its own snapshot, rather
> than
>  * sharing the snapshot of the calling query.  We also disallow
> returning
>  * SETOF VOID, because inlining would result in exposing the actual
> result
>  * of the function's last SELECT, which should not happen in that case.
>
> So try adding STABLE to the function definition.
>

Indeed, when I add STABLE to the function, the planner uses the index
(the same plan as (B) in the original post).
Thanks, Tom.
Thierry
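For reference, the fix looks like this (the function and table names here are illustrative, not the original poster's):

```sql
-- Declaring the set-returning SQL function STABLE (the default is
-- VOLATILE, which blocks inlining) lets the planner inline its body
-- into the calling query, so indexes on the underlying table apply.
CREATE FUNCTION events_for(p_user int)
RETURNS TABLE (id int, created_at timestamptz)
LANGUAGE sql
STABLE
AS $$
  SELECT id, created_at FROM events WHERE user_id = p_user;
$$;
```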


Re: More than one Cluster on single server (single instance)

2022-07-21 Thread Daulat
I have resolved this issue and it's working.
Thanks


On Thu, Jul 21, 2022 at 1:45 PM Daulat  wrote:

> Thanks for your inputs.
>
> We are considering this approach to consolidate some small applications
> for individual clients and to perform administration (data/log/pgbackrest
> backups, etc.) separately, which is not possible within a single cluster.
>
> We tried to create and start a second cluster on port 5433 with a
> separate data directory, but we are getting errors.
>
> - I replaced the default port with 5433 in the postgresql.conf file.
>
> - We are not using Ubuntu, so I think we can't use the pg_createcluster
> wrapper.
>
> ./initdb -D /opt/PostgreSQL-10/whr_data1_5433 -U postgres -W
>
> ./pg_ctl -D /opt/PostgreSQL-10/whr_data1_5433 -l
> /opt/PostgreSQL-10/whr_dlog/start.log start
>
>
>
> Error while creating Cluster:
>
> [postgres@ip-172-31-32-99 bin]$ cat /opt/PostgreSQL-10/whr_dlog/start.log
>
> 2022-07-18 13:11:14.404 UTC [13418] LOG:  could not bind IPv4 address
> "127.0.0.1": Address already in use
>
> 2022-07-18 13:11:14.404 UTC [13418] HINT:  Is another postmaster already
> running on port 5432? If not, wait a few seconds and retry.
>
> 2022-07-18 13:11:14.404 UTC [13418] WARNING:  could not create listen
> socket for "localhost"
>
> 2022-07-18 13:11:14.404 UTC [13418] FATAL:  could not create any TCP/IP
> sockets
>
> 2022-07-18 13:11:14.404 UTC [13418] LOG:  database system is shut down
>
> [postgres@ip-172-31-32-99 bin]$
>
> On Mon, Jul 18, 2022 at 7:12 PM Ron  wrote:
>
>> On 7/18/22 04:46, Guillaume Lelarge wrote:
>>
>> Hi,
>>
>> [snip]
>>
>> It would make things easier for you to use a single cluster, and as many
>> databases as you want in this cluster.
>>
>>
>> It would make *PITR* restoration *of a single database* (or small set of
>> databases) impossible (since such backups -- and therefore restores -- are
>> always binary "whole cluster").
>>
>> --
>> Angular momentum makes the world go 'round.
>>
>


RE: [External]Re: Patroni & PostgreSQL issue

2022-07-21 Thread Menon, Deepak (Deepak)
Hi Abdul

I wanted to know if this is standard Patroni behaviour.

Is there any reason why the files were not deleted from the replicas when
they were deleted from the primary server?

Deepak Menon| Avaya Managed Services-Delivery|+91 9899012875| 
men...@avaya.com
Leave Alert : 19th , 20 July  2022

From: Abdul Sayeed 
Sent: Thursday, July 21, 2022 12:49 AM
To: Sankar, Uma (Uma) **CTR** 
Cc: pgsql-gene...@postgresql.org; Menon, Deepak (Deepak) 
Subject: [External]Re: Patroni & PostgreSQL issue

[External Sender]
Hi Uma,

If I understand your scenario correctly: after failover, Patroni re-created
the deleted files on the old primary by replicating from the new primary?

If that is correct, I would recommend checking the lag between the new
primary and the old primary (now a replica). If it is zero, then you are
good to perform the failover.
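A minimal sketch of such a lag check, run on the current primary (column names as in pg_stat_replication on PostgreSQL 10 and later):

```sql
-- Bytes of WAL each connected standby still has to replay.
-- 0 means the replica is fully caught up and safe to fail over to.
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```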


Regards,
Abdul Sayeed

On Wed, Jul 20, 2022 at 5:29 PM Sankar, Uma (Uma) 
mailto:usan...@avaya.com>> wrote:
Hi All,

This is regarding Postgres HA with Patroni in a 3-node setup. We have an
issue on the primary: a few database files were deleted manually, so we
performed a switchover with Patroni to move the services from the primary
to the secondary. After the switchover, the deleted files were re-created
by themselves on the old primary.

Now we are planning to switch the services back from the secondary (leader)
to the primary (replica), since the deleted files have been re-created. Can
someone please confirm whether this works normally when we switch back,
promoting the primary as the leader and the secondary as a replica?


Regards,
Uma Sankar



--
Thanks & Regards,
Abdul Sayeed
PostgreSQL DBA
Postgres Professional Certified
Skype: abdul.sayeed24


Unable to archive logs in standby server

2022-07-21 Thread Meera Nair
Hi team,

With the non-exclusive backup method, we are trying a backup from the
standby node. But the pg_stop_backup function returns "WAL archiving is not
enabled..." and the logs are not archived to the configured WAL directory.

Please check whether I am missing anything in configuring this properly.
The server was restarted after setting the archiving parameters in
postgresql.conf.

Below is from version 14:

postgres=# select pg_start_backup('label', false, false);
pg_start_backup
-
0/6D8
(1 row)


postgres=#  select pg_stop_backup('false');
NOTICE:  WAL archiving is not enabled; you must ensure that all required WAL 
segments are copied through other means to complete the backup
  pg_stop_backup
---
(0/60001C0,"START WAL LOCATION: 0/6D8 (file 00010006)+
CHECKPOINT LOCATION: 0/6000110   +
BACKUP METHOD: streamed  +
BACKUP FROM: standby +
START TIME: 2022-07-21 12:42:11 IST  +
LABEL: label +
START TIMELINE: 1+
","")
(1 row)


postgres=# select pg_is_in_recovery();
pg_is_in_recovery
---
t
(1 row)


postgres=# show wal_level;
wal_level
---
replica
(1 row)


postgres=# show archive_mode;
archive_mode
--
on
(1 row)


postgres=# show archive_command;
archive_command

copy "%p" "D:\PostgreSQL\14\standby_14\wal\%f"
(1 row)



Regards,
Meera


Re: Unable to archive logs in standby server

2022-07-21 Thread Guillaume Lelarge
Hi,

Le jeu. 21 juil. 2022 à 13:58, Meera Nair  a écrit :

> Hi team,
>
>
>
> With the non-exclusive backup method, we are trying a backup from the
> standby node.
>
> But the pg_stop_backup function returns “WAL archiving is not enabled…”
> and the logs are not archived to the configured WAL directory.
>
> Please check whether I am missing anything in configuring this properly.
> The server was restarted after setting the archiving parameters in
> postgresql.conf.
>
>
>
> Below is from version 14:
>
>
>
> postgres=# select pg_start_backup('label', false, false);
>
> pg_start_backup
>
> -
>
> 0/6D8
>
> (1 row)
>
>
>
>
>
> postgres=#  select pg_stop_backup('false');
>
> NOTICE:  WAL archiving is not enabled; you must ensure that all required
> WAL segments are copied through other means to complete the backup
>
>   pg_stop_backup
>
> ---
>
> (0/60001C0,"START WAL LOCATION: 0/6D8 (file 00010006)+
>
> CHECKPOINT LOCATION: 0/6000110   +
>
> BACKUP METHOD: streamed  +
>
> BACKUP FROM: standby +
>
> START TIME: 2022-07-21 12:42:11 IST  +
>
> LABEL: label +
>
> START TIMELINE: 1+
>
> ","")
>
> (1 row)
>
>
>
>
>
> postgres=# select pg_is_in_recovery();
>
> pg_is_in_recovery
>
> ---
>
> t
>
> (1 row)
>
>
>
>
>
> postgres=# show wal_level;
>
> wal_level
>
> ---
>
> replica
>
> (1 row)
>
>
>

>
> postgres=# show archive_mode;
>
> archive_mode
>
> --
>
> on
>
> (1 row)
>
>
>


You're doing backups from the standby, and to allow archiving on the
standby, archive_mode should be set to always.
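A sketch of the relevant postgresql.conf settings on the standby (the archive_command path is taken from the original post and is illustrative):

```
# postgresql.conf on the standby
wal_level = replica
archive_mode = always    # 'on' only archives while the server is a primary
archive_command = 'copy "%p" "D:\PostgreSQL\14\standby_14\wal\%f"'
```

Note that archive_mode requires a server restart to take effect.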


-- 
Guillaume.


FK Constraint sort order with pg_dump

2022-07-21 Thread Christian Barthel
Hello,

The sorting order of FK constraints with the same name is based on the
OID (because it lands in the “Usually shouldn’t get here” OID comparison
block at [1]).  Wouldn’t it be better if the order of those constraints
were based on the table name?

Details:

The two schemas below are identical except for the order in which the
constraints were added (the constraint name is the same on both tables):

--8<---cut here---start->8---
  -- --- Schema Version 1:
  CREATE TABLE a (id int unique);

  CREATE TABLE b (id int);
  ALTER TABLE b ADD CONSTRAINT x_fkey FOREIGN KEY (id) REFERENCES a(id);
  CREATE TABLE c (id int);
  ALTER TABLE c ADD CONSTRAINT x_fkey FOREIGN KEY (id) REFERENCES a(id);

  -- --- Schema Version 2:

  CREATE TABLE a (id int unique);

  CREATE TABLE c (id int);
  ALTER TABLE c ADD CONSTRAINT x_fkey FOREIGN KEY (id) REFERENCES a(id);
  CREATE TABLE b (id int);
  ALTER TABLE b ADD CONSTRAINT x_fkey FOREIGN KEY (id) REFERENCES a(id);
--8<---cut here---end--->8---

Doing a pg_dump on Version 1 and Version 2 leads to two different dumps
despite being the same schema: (*)

--8<---cut here---start->8---
--- version12022-07-21 19:16:31.369010843 +0200
+++ version22022-07-21 19:16:26.688976178 +0200
@@ -86,18 +86,18 @@
 
 
 --
--- Name: b x_fkey; Type: FK CONSTRAINT; Schema: public; Owner: bch
+-- Name: c x_fkey; Type: FK CONSTRAINT; Schema: public; Owner: bch
 --
 
-ALTER TABLE ONLY public.b
+ALTER TABLE ONLY public.c
 ADD CONSTRAINT x_fkey FOREIGN KEY (id) REFERENCES public.a(id);
 
 
 --
--- Name: c x_fkey; Type: FK CONSTRAINT; Schema: public; Owner: bch
+-- Name: b x_fkey; Type: FK CONSTRAINT; Schema: public; Owner: bch
 --
 
-ALTER TABLE ONLY public.c
+ALTER TABLE ONLY public.b
 ADD CONSTRAINT x_fkey FOREIGN KEY (id) REFERENCES public.a(id);
--8<---cut here---end--->8---

Attached is a patch file that adds a string comparison function call to
sort FK constraints (based on the table if it exists).  Any thoughts on
that?

[1]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_dump/pg_dump_sort.c;h=80641cd79a2e6ce0a10bd55218b10d22ac369ed5;hb=7c850320d8cfa5503ecec61c2559661b924f7595#l212

(*) Tested on 14.4
-- 
Christian Barthel

modified   src/bin/pg_dump/pg_dump_sort.c
@@ -300,6 +300,23 @@ DOTypeNameCompare(const void *p1, const void *p2)
 		if (cmpval != 0)
 			return cmpval;
 	}
+	else if (obj1->objType == DO_FK_CONSTRAINT)
+	{
+		ConstraintInfo *cobj1 = *(ConstraintInfo *const *) p1;
+		ConstraintInfo *cobj2 = *(ConstraintInfo *const *) p2;
+
+		if (cobj1->contable != NULL && cobj2->contable != NULL)
+		{
+			/*
+			 * Sort two FK constraints with the same name by their
+			 * corresponding relation name.
+			 */
+			cmpval = strcmp(cobj1->contable->dobj.name,
+							cobj2->contable->dobj.name);
+			if (cmpval != 0)
+				return cmpval;
+		}
+	}
 
 	/* Usually shouldn't get here, but if we do, sort by OID */
 	return oidcmp(obj1->catId.oid, obj2->catId.oid);

Re: FK Constraint sort order with pg_dump

2022-07-21 Thread Adrian Klaver

On 7/21/22 10:25, Christian Barthel wrote:

Hello,

The sorting order of FK constraints with the same name is based on the
OID (because it lands in the “Usually shouldn’t get here” OID comparison
block at [1]).  Wouldn’t it be better if the order of those constraints
were based on the table name?



Why does it matter?

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: FK Constraint sort order with pg_dump

2022-07-21 Thread David G. Johnston
On Thu, Jul 21, 2022 at 10:49 AM Adrian Klaver 
wrote:

> On 7/21/22 10:25, Christian Barthel wrote:
> > Hello,
> >
> > The sorting order of FK constraints with the same name is based on the
> > OID (because it lands in the “Usually shouldn’t get here” OID comparison
> > block at [1]).  Wouldn’t it be better if the order of those constraints
> > were based on the table name?
> >
>
> Why does it matter?
>
>
As the code comment says:

/* To have a stable sort order, break ties for some object types */

This seems like it is simply a missed case.

David J.


Re: FK Constraint sort order with pg_dump

2022-07-21 Thread Christian Barthel
On Thursday, July 21, 2022, Adrian Klaver wrote:

> On 7/21/22 10:25, Christian Barthel wrote:
>> Hello, The sorting order of FK constraints with the same name is
>> based on the OID (because it lands in the “Usually shouldn’t get
>> here” OID comparison block at [1]).  Wouldn’t it be better if the
>> order of those constraints were based on the table name?
>> 
>
> Why does it matter?

As the comment in pg_dump.c states, logically identical schemas should
produce identical dumps:

| * We rely on dependency information to help us determine a safe order,
| so * the initial sort is mostly for cosmetic purposes: we sort by name
| to * ensure that logically identical schemas will dump identically.
  


This is done for most objects (tables, functions etc).  Why not for FK
constraints?

It makes comparing schemas on different postgres instances simpler
(i.e. when you’re working with testing, staging, live systems etc).
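As an illustration of that workflow (the database names are hypothetical), the comparison amounts to diffing schema-only dumps of the two instances:

```shell
# Dump only the schema (no data) from two environments and compare.
# With stable sort order in pg_dump, identical schemas diff clean.
pg_dump --schema-only --no-owner staging_db > staging.sql
pg_dump --schema-only --no-owner live_db    > live.sql
diff -u staging.sql live.sql
```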

-- 
Christian Barthel




Re: FK Constraint sort order with pg_dump

2022-07-21 Thread Adrian Klaver

On 7/21/22 10:59, Christian Barthel wrote:

On Thursday, July 21, 2022, Adrian Klaver wrote:



Why does it matter?


As the comment in pg_dump.c states, logically identical schemas should
produce identical dumps:

| * We rely on dependency information to help us determine a safe order,
| so * the initial sort is mostly for cosmetic purposes: we sort by name
| to * ensure that logically identical schemas will dump identically.
   


This is done for most objects (tables, functions etc).  Why not for FK
constraints?

It makes comparing schemas on different postgres instances simpler
(i.e. when you’re working with testing, staging, live systems etc).



Alright that I can see.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: FK Constraint sort order with pg_dump

2022-07-21 Thread Tom Lane
Christian Barthel  writes:
> On Thursday, July 21, 2022, Adrian Klaver wrote:
>> Why does it matter?

> As the comment in pg_dump.c states, logically identical schemas should
> produce identical dumps:

Agreed, but this is far from the only deficiency in DOTypeNameCompare.
If we're going to try to fill in the gaps, we should be systematic
about it.  I took a quick stab at implementing the cases it omits,
as attached.  There are still a few gaps:

* DO_OPCLASS and DO_OPFAMILY ought to have a subsidiary sort on the
access method name, since their names are only unique per-access-method.
The trouble here is that OpclassInfo/OpfamilyInfo don't provide any
way to get to the access method.  That could be fixed with more fields,
and maybe it's worth doing, but I didn't do that here.

* For casts, the cast name is consed up as the concatenation of the source
and target type names, which isn't enough to guarantee uniqueness.  We
could add the types' schema names, perhaps ... is it worth the trouble?
(Not to mention possible breakage of scripts that expect the current
naming convention.)

* Likewise for transforms, we'd need to add the type's schema name if
we want the transform name to be unique.

Not sure whether it's worth venturing into such nonlocal fixes.

regards, tom lane

diff --git a/src/bin/pg_dump/pg_dump_sort.c b/src/bin/pg_dump/pg_dump_sort.c
index 5de3241eb4..2282c002ae 100644
--- a/src/bin/pg_dump/pg_dump_sort.c
+++ b/src/bin/pg_dump/pg_dump_sort.c
@@ -280,6 +280,34 @@ DOTypeNameCompare(const void *p1, const void *p2)
 		if (cmpval != 0)
 			return cmpval;
 	}
+	else if (obj1->objType == DO_CONSTRAINT ||
+			 obj1->objType == DO_FK_CONSTRAINT)
+	{
+		ConstraintInfo *cobj1 = *(ConstraintInfo *const *) p1;
+		ConstraintInfo *cobj2 = *(ConstraintInfo *const *) p2;
+		const char *objname1;
+		const char *objname2;
+
+		/* Sort by owning object name (namespace was considered already) */
+		if (cobj1->contable)
+			objname1 = cobj1->contable->dobj.name;
+		else if (cobj1->condomain)
+			objname1 = cobj1->condomain->dobj.name;
+		else
+			objname1 = NULL;
+		if (cobj2->contable)
+			objname2 = cobj2->contable->dobj.name;
+		else if (cobj2->condomain)
+			objname2 = cobj2->condomain->dobj.name;
+		else
+			objname2 = NULL;
+		if (objname1 && objname2)
+		{
+			cmpval = strcmp(objname1, objname2);
+			if (cmpval != 0)
+				return cmpval;
+		}
+	}
 	else if (obj1->objType == DO_POLICY)
 	{
 		PolicyInfo *pobj1 = *(PolicyInfo *const *) p1;
@@ -291,6 +319,17 @@ DOTypeNameCompare(const void *p1, const void *p2)
 		if (cmpval != 0)
 			return cmpval;
 	}
+	else if (obj1->objType == DO_RULE)
+	{
+		RuleInfo   *robj1 = *(RuleInfo *const *) p1;
+		RuleInfo   *robj2 = *(RuleInfo *const *) p2;
+
+		/* Sort by table name (table namespace was considered already) */
+		cmpval = strcmp(robj1->ruletable->dobj.name,
+		robj2->ruletable->dobj.name);
+		if (cmpval != 0)
+			return cmpval;
+	}
 	else if (obj1->objType == DO_TRIGGER)
 	{
 		TriggerInfo *tobj1 = *(TriggerInfo *const *) p1;
@@ -302,6 +341,39 @@ DOTypeNameCompare(const void *p1, const void *p2)
 		if (cmpval != 0)
 			return cmpval;
 	}
+	else if (obj1->objType == DO_DEFAULT_ACL)
+	{
+		DefaultACLInfo *daclobj1 = *(DefaultACLInfo *const *) p1;
+		DefaultACLInfo *daclobj2 = *(DefaultACLInfo *const *) p2;
+
+		/* Sort by role name (objtype and namespace were considered already) */
+		cmpval = strcmp(daclobj1->defaclrole,
+		daclobj2->defaclrole);
+		if (cmpval != 0)
+			return cmpval;
+	}
+	else if (obj1->objType == DO_PUBLICATION_REL)
+	{
+		PublicationRelInfo *probj1 = *(PublicationRelInfo *const *) p1;
+		PublicationRelInfo *probj2 = *(PublicationRelInfo *const *) p2;
+
+		/* Sort by publication name (table name/nsp was considered already) */
+		cmpval = strcmp(probj1->publication->dobj.name,
+		probj2->publication->dobj.name);
+		if (cmpval != 0)
+			return cmpval;
+	}
+	else if (obj1->objType == DO_PUBLICATION_TABLE_IN_SCHEMA)
+	{
+		PublicationSchemaInfo *psobj1 = *(PublicationSchemaInfo *const *) p1;
+		PublicationSchemaInfo *psobj2 = *(PublicationSchemaInfo *const *) p2;
+
+		/* Sort by publication name (schema name was considered already) */
+		cmpval = strcmp(psobj1->publication->dobj.name,
+		psobj2->publication->dobj.name);
+		if (cmpval != 0)
+			return cmpval;
+	}
 
 	/* Usually shouldn't get here, but if we do, sort by OID */
 	return oidcmp(obj1->catId.oid, obj2->catId.oid);


Re: could not link file in wal restore lines

2022-07-21 Thread Zsolt Ero
Can someone with knowledge of PostgreSQL tell me whether this is a bug and
whether there is any need to worry about data corruption?

Here is the issue I opened for pgbackrest; the developer confirmed that
pgbackrest doesn't change those files at all.

https://github.com/pgbackrest/pgbackrest/issues/1815

It's PostgreSQL 12 on Ubuntu 18.04. I always start with a clean data folder
and can reproduce this 100% of the time when I do a restore. Single master,
no clustering or replication; backup/restore is on the same partition.

Thanks,
Zsolt




On 15. Jul 2022 at 22:11:09, Zsolt Ero  wrote:

> Hi,
>
> I'm testing pgbackrest restore.
>
> It works well, but in the pg logs I get the following lines:
>
> could not link file "pg_wal/000101560098" to
> "pg_wal/00010157006E": File exists
>
> In total, I get 18 lines of "could not link file" and 932 lines of
> "restored log file" lines.
>
> At the end it finishes with:
>
> redo done at 15A/A001710
> last completed transaction was at log time 2022-07-15 19:41:05.175573+00
> restored log file "0003015A000A" from archive
> selected new timeline ID: 4
> archive recovery complete
> restored log file "0003.history" from archive
> database system is ready to accept connections
>
> Before going in production, I wanted to ask, if this is correct like this?
> I mean are those "could not link file" lines anything to worry about?
>
> Thanks and regards,
> Zsolt
>
>
>
>


Is psqlodbc_13_02 compatible with M365?

2022-07-21 Thread Taka Taka
Hello.
I would like to know if psqlodbc_13_02 is compatible with Microsoft 365
(M365).
Also, could you please tell me which of the psqlodbc_13_02 builds would be
suitable for Windows 10 64-bit?