Logical replication slots on slaves/replicas?

2024-08-01 Thread Piotr Andreassen Blasiak
Hi,

I know that logical replication slots are currently available only on 
primary servers. Is there any plan to add this feature to read replicas as 
well? My problem is this:

I want to use Debezium to stream changes from PostgreSQL. But if I stream 
changes from the primary, I cannot query my read replicas for the data 
related to those changes - I would always have to query the primary, which 
is not scalable. So either I need a way to know when a change has been 
propagated to my read replica, so that I can reliably query it, or I am 
hoping I can simply read all the changes from the read replica itself, 
which would mean it is already up to date when I query it.

Piotr Andreassen Blasiak






Re: Logical replication slots on slaves/replicas?

2024-08-01 Thread shammat
Piotr Andreassen Blasiak wrote on 01.08.2024 at 10:42:
> [...]

I don't really know how Debezium uses this, but will this enhancement
in Postgres 17 help here?

https://www.postgresql.org/docs/17/functions-admin.html#PG-SYNC-REPLICATION-SLOTS

> Synchronize the logical failover replication slots from the primary
> server to the standby server. This function can only be executed on
> the standby server. Temporary synced slots, if any, cannot be used
> for logical decoding and must be dropped after promotion.
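
A rough sketch of how that is meant to be used (untested; 'dbz_slot' is just
an example slot name, and slot sync also needs the usual standby setup such
as primary_slot_name and hot_standby_feedback):

-- on the primary (PG 17): create a logical slot with failover enabled
SELECT pg_create_logical_replication_slot('dbz_slot', 'pgoutput',
                                          false,  -- temporary
                                          false,  -- twophase
                                          true);  -- failover

-- on the standby: pull the failover-enabled slots over from the primary
SELECT pg_sync_replication_slots();

-- verify on the standby
SELECT slot_name, failover, synced FROM pg_replication_slots;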





PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread KK CHN
List,

Not working (starting the EPAS server always fails):

I am testing PITR using PgBackRest (2.52.1) on RHEL9 with EPAS-16, plus a
RHEL9 repo server.

When I do a PITR like this:

1. Drop a table, then
2. note down the timestamp, then
3. take an incremental backup, hoping that a restore from this incremental
backup will not contain the dropped table.
4. Correct me if I am conceptually wrong here.
5. I am never successful in restoring the EPAS server in this scenario.


I know the following will work for me - but why not the procedure above, if
that is the state of the cluster I actually want?

This is working:
1. Create a table.
2. Note down the timestamp.
3. Take an incremental backup on the repo server.
4. Drop the created table.
5. Stop the EPAS server and do a PITR with --set=<the step 3 incr backup>
and --target=<the step 2 timestamp>. The pgbackrest restore and promote
commands finish successfully.
6. I am able to start the EPAS server again and see the dropped table
recovered there.

But if I want a PITR as in the first section, it fails. Why?

Thank you,
Krishane





What I have done and the results obtained:

I created a table important_table4 in my EPAS cluster and noted down the
time after its creation ( t1 : "01-AUG-24 14:08:32.447796+05:30" ).

Then I performed an incremental backup (incr backup:
20240729-160137F_20240801-141148I )
timestamp start/stop: 2024-08-01 14:11:48+05:30 / 2024-08-01 14:11:52+05:30


Then I dropped important_table4 from EPAS and noted down the time.


I wanted to restore important_table4, so I stopped EPAS and executed:

$ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
--set=20240729-160137F_20240801-141148I  --target-timeline=current
--type=time  --target="01-AUG-24 14:08:32.447796+05:30"
--target-action=promote restore

IT WORKS AS EXPECTED: after restarting EPAS I get important_table4 back.

[root@service01 ~]# sudo -u enterprisedb psql edb
psql (16.3.0)
Type "help" for help.

edb=# \dt
            List of relations
 Schema |       Name       | Type  |    Owner
--------+------------------+-------+--------------
 public | important_table  | table | enterprisedb
 public | important_table2 | table | enterprisedb
 public | important_table3 | table | enterprisedb
 public | important_table4 | table | enterprisedb
(4 rows)

So all works fine.


But now the PROBLEM statement:

1. I drop important_table4 again:
edb=# \q
[root@service01 ~]# sudo -u enterprisedb psql -c "begin; drop table
important_table4; commit;" edb
BEGIN
DROP TABLE
COMMIT

2. [root@service01 ~]# sudo -u enterprisedb psql -Atc "select
current_timestamp" edb
01-AUG-24 14:23:22.085076 +05:30
Noting the time as: (01-AUG-24 14:23:22.085076 +05:30)

3. Now I perform an incremental backup after step 2 on the repo server
(hoping that this latest incremental backup is without the dropped
important_table4, so that a recovery of the cluster won't show that table
again).

incr backup details: 20240729-160137F_20240801-142433I
timestamp start/stop: 2024-08-01 14:24:33+05:30 / 2024-08-01 14:24:36+05:30

4. Now I want to test recovery after dropping the table in step 1, to
verify that an EPAS restore from the backup in step 3 with the target
timestamp (01-AUG-24 14:23:22.085076 +05:30) does not contain
important_table4.

5. $ sudo -u enterprisedb pgbackrest --stanza=Demo_Repo --delta
--set=20240729-160137F_20240801-142433I  --target-timeline=current
--type=time  --target="01-AUG-24 14:23:22.085076+05:30"
--target-action=promote restore

-
INFO: restore command end: completed successfully (1035ms)

ISSUE: I am unable to get the EPAS server into a running state after step 5.

What am I doing wrong? Or am I conceptually wrong?




OUTPUT on executing step 5.

[root@service01 ~]# sudo -u enterprisedb pgbackrest --stanza=Demo_Repo
--delta --set=20240729-160137F_20240801-142433I  --target-timeline=current
--type=time  --target="01-AUG-24 14:23:22.085076+05:30"
--target-action=promote restore

2024-08-01 14:30:03.535 P00   INFO: restore command begin 2.52.1: --delta
--exec-id=82738-b5fe7415 --log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/edb/as16/data --pg-version-force=16
--repo1-host=10.10.20.7 --repo1-host-user=postgres
--set=20240729-160137F_20240801-142433I --stanza=Demo_Repo
--target="01-AUG-24 14:23:22.085076+05:30" --target-action=promote
--target-timeline=current --type=time
2024-08-01 14:30:03.880 P00   INFO: repo1: restore backup set
20240729-160137F_20240801-142433I, recovery will start at 2024-08-01
14:24:33
2024-08-01 14:30:03.881 P00   INFO: remove invalid files/links/paths from
'/var/lib/edb/as16/data'
2024-08-01 14:30:04.567 P00   INFO: write updated
/var/lib/edb/as16/data/postgresql.auto.conf
2024-08-01 14:30:04.569 P00   INFO: restore global/pg_control (performed
last to ensure aborted restores cannot be started)
2024-08-01 14:30:04.5

Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread Kashif Zeeshan
Hi

On Thu, Aug 1, 2024 at 2:54 PM KK CHN  wrote:

> [...]

Re: Logical replication slots on slaves/replicas?

2024-08-01 Thread Torsten Förtsch
A COMMIT record in the WAL has an LSN. I don't know much about Debezium,
but wal2json can give you this LSN. Then there is the function
pg_last_wal_replay_lsn(). I think you could run it on the replica to see
whether replay is past the point of the commit. If it is, you should be
able to see the changes made by the transaction.
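
Roughly (untested; '0/1A2B3C4D' is just a placeholder for the commit LSN you
took from the logical stream):

-- on the replica: has replay reached the commit LSN of the change we consumed?
SELECT pg_last_wal_replay_lsn() >= '0/1A2B3C4D'::pg_lsn AS commit_replayed;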

BTW, I am sure I can construct a situation where you see your transaction
in the logical WAL stream before it becomes visible on the master. In
particular, this is possible with a synchronous replication setup. So I
would not trust that a transaction is always visible on the master as soon
as it is reported committed in the WAL stream.

All the best,
Torsten

On Thu, Aug 1, 2024 at 10:42 AM Piotr Andreassen Blasiak <
pi...@attendium.com> wrote:

> [...]


Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread KK CHN
The logs are here.

https://pastecode.io/s/s5dp8ur1



On Thu, Aug 1, 2024 at 3:30 PM Kashif Zeeshan 
wrote:

> [...]

Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread Mateusz Henicz
When you are performing PITR you need to choose a target timestamp that is
earlier than your last committed transaction. In your case you provided a
timestamp that is after your last commit.

When Postgres is recovering to a specified point, it replays one
transaction from WAL at a time and checks whether the next transaction is
before or after the target timestamp. If it is before, it replays it and
checks the next one, and so on until the next transaction is after the
configured timestamp.
If there is no transaction after your target timestamp in the current WAL,
Postgres tries to restore the next WAL segment from the archive. And since
there is no next WAL, and your timestamp is past the latest committed
transaction, it cannot continue, because it does not know whether there is
another transaction that should be replayed.

Just perform some other actions after you note down the timestamp
following the DROP TABLE: create another table, insert some data - anything
that produces another committed transaction in the WAL.
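
For example, right after noting down the target timestamp (a sketch;
pitr_noise is just a throwaway name):

-- any later committed transaction will do
CREATE TABLE pitr_noise (id int);
INSERT INTO pitr_noise VALUES (1);
DROP TABLE pitr_noise;

-- optionally also force the current WAL segment out to the archive
SELECT pg_switch_wal();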

Cheers,
Mateusz

On Thu, 1 Aug 2024 at 12:23, KK CHN wrote:

> The logs are here.
>
> https://pastecode.io/s/s5dp8ur1
>
> [...]

Re: PgBackRest PTR recovery: After table drop to get dropped state

2024-08-01 Thread KK CHN
Sorry, ignore the previous paste - it was incomplete.

Here is the full one: https://pastecode.io/s/hya0fyvo


On Thu, Aug 1, 2024 at 4:02 PM KK CHN  wrote:

> [...]

Re: Logical replication slots on slaves/replicas?

2024-08-01 Thread Piotr Andreassen Blasiak
That's a great point. Thank you.

Piotr Andreassen Blasiak




> On 1 Aug 2024, at 12:03, Torsten Förtsch wrote:
> [...]



VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Jim Vanns
I have two sessions, each creating a temporary table with the same name,
'foobar'. Why would a VACUUM of 'foobar' from one session block the VACUUM
of 'foobar' in the other session? They live in separate per-session schemas
(pg_temp_NN) and are effectively distinct tables as far as the sessions are
concerned.
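
(For what it's worth, a quick check along these lines shows them as two
distinct relations, each in its own per-backend pg_temp_NN schema:)

-- list all temporary relations named 'foobar' and the schema each lives in
SELECT c.oid, n.nspname AS schema, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'foobar'
  AND c.relpersistence = 't';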

Cheers,

Jim

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Tom Lane
Jim Vanns  writes:
> I have two sessions, each creating a temporary table of the same name -
> 'foobar'. Why would a vacuum of 'foobar' from one session block the vacuum
> of 'foobar' in the other session?

That's fairly hard to believe.  Can you provide a self-contained test
case?

regards, tom lane




Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Adrian Klaver

On 8/1/24 12:12, Tom Lane wrote:
> Jim Vanns  writes:
>> I have two sessions, each creating a temporary table of the same name -
>> 'foobar'. Why would a vacuum of 'foobar' from one session block the vacuum
>> of 'foobar' in the other session?
>
> That's fairly hard to believe.  Can you provide a self-contained test
> case?


It would also be nice to know what version of Postgres you are running,
and where the server is running.




> regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Jim Vanns
Sorry, PG15.

Jim

On Thu, 1 Aug 2024 at 20:25, Adrian Klaver 
wrote:

> [...]

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Jim Vanns
I don't have one to hand, but the query I used to identify this was:

SELECT state, pid, pg_blocking_pids(pid), query
FROM pg_stat_activity
WHERE backend_type='client backend';
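
Next time it happens I can also capture the lock details, probably with
something along these lines (untested sketch):

-- which ungranted locks are being waited on, and by which query
SELECT a.pid, l.locktype, l.relation::regclass AS relation, l.mode, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;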

On Thu, 1 Aug 2024 at 20:12, Tom Lane  wrote:

> [...]


-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Ron Johnson
PG 15.0 or 15.7?

On Thu, Aug 1, 2024 at 3:58 PM Jim Vanns  wrote:

> [...]


Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Jim Vanns
PG 15.4

On Thu, 1 Aug 2024 at 22:27, Ron Johnson  wrote:

> [...]

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: VACUUM on temp table blocks VACUUM on another...

2024-08-01 Thread Adrian Klaver

On 8/1/24 15:13, Jim Vanns wrote:
> PG 15.4


Community edition or some variant?




--
Adrian Klaver
adrian.kla...@aklaver.com