Re: Logical Replication speed-up initial data

2021-08-05 Thread Rick Otten
On Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty 
wrote:

> Hi,
>
> Thank you for the suggestion.
>
> We tried dropping indexes and it worked faster compared to what we saw
> earlier. We wanted to know if anybody has made any other changes that help
> speed up the initial data load without dropping indexes.
>

It would be kind of cool if the database could just "know" that it was an
initial load and automatically suppress FK checks and index updates until
the load is done. Once complete, it would go back and concurrently rebuild
the indexes and validate the FKs. Then you wouldn't have to manually
drop all of your indexes and add them back, hoping you got them all, and
got them right.


Re: Logical Replication speed-up initial data

2021-08-05 Thread Vijaykumar Jain
On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty  wrote:

> Hi,
>
> Thank you for the suggestion.
>
> We tried dropping indexes and it worked faster compared to what we saw
> earlier. We wanted to know if anybody has made any other changes that help
> speed up the initial data load without dropping indexes.
>
>
PS: I have not tested this under production-level loads; it was just an
experiment I did on my laptop.

One option would be to use the pglogical extension (this was shared by
Dharmendra in one of the previous mails; sharing the same),
and then use the pglogical_create_subscriber CLI to create the initial copy
via pg_basebackup and carry on from there.
I ran a test case similar to the one below in my local env, and it seems to
work fine. Of course, I do not have TBs' worth of load to test with, but it
looks promising, especially since they introduced it into core.
pglogical/010_pglogical_create_subscriber.pl at REL2_x_STABLE ·
2ndQuadrant/pglogical (github.com)
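For reference, the invocation looks roughly like the sketch below. All names,
paths, and DSNs are placeholders, and the exact flags should be verified
against the pglogical version in use:

```shell
# Build the subscriber from a physical copy of the provider
# (pglogical_create_subscriber takes the base backup for you),
# then switch that copy over to logical replication.
# All values below are hypothetical.
pglogical_create_subscriber \
  --pgdata=/var/lib/postgresql/subscriber_data \
  --subscriber-name=sub1 \
  --subscriber-dsn="host=subscriber.example dbname=appdb user=postgres" \
  --provider-dsn="host=provider.example dbname=appdb user=postgres"
```

Once the copy is made and the subscriber started, replication continues
logically from the point the base backup was taken.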

Once you attain a reasonable sync state, you can drop the pglogical
extension and check whether things continue fine.
I have done something similar when upgrading from 9.6 to 11 using pglogical
and then dropping the extension, and it was smooth;
maybe you can try this out and share whether it works for you.
See also:
The 1-2-3 for PostgreSQL Logical Replication Using an RDS Snapshot -
Percona Database Performance Blog



Re: Logical Replication speed-up initial data

2021-08-05 Thread Avinash Kumar
Hi,

On Thu, Aug 5, 2021 at 11:28 AM Vijaykumar Jain <
[email protected]> wrote:

> On Thu, 5 Aug 2021 at 10:27, Nikhil Shetty  wrote:
>
>> Hi,
>>
>> Thank you for the suggestion.
>>
>> We tried dropping indexes and it worked faster compared to what we saw
>> earlier. We wanted to know if anybody has made any other changes that help
>> speed up the initial data load without dropping indexes.
>>
>>
You could leverage pg_basebackup, or pg_dump with parallel jobs,
taken from a standby (preferably with replication paused if using pg_dump;
pg_basebackup should be straightforward either way), or even from the
primary, for the purpose of the initial data load.

As you were able to drop indexes and make some schema changes, I would
assume that you could pause your app temporarily. If that's the case,
you may look into the simple steps I am posting here, which demonstrate
pg_dump/pg_restore instead.

If you cannot pause the app, then you could look into how to
use pg_replication_origin_advance



Step 1 : Pause the app.
Step 2 : Create a publication on the primary:
CREATE PUBLICATION <publication_name> FOR ALL TABLES;
Step 3 : Create a logical replication slot on the primary:
SELECT * FROM pg_create_logical_replication_slot('<slot_name>', 'pgoutput');
Step 4 : Create the subscription, but do not enable it:
CREATE SUBSCRIPTION <subscription_name> CONNECTION
'host=<primary_host> dbname=<dbname> user=postgres
password=secret port=5432' PUBLICATION <publication_name>
WITH (copy_data = false, create_slot = false, enabled = false,
slot_name = <slot_name>);

Step 5 : Initiate pg_dump. We can take a parallel backup for a faster
restore.

$ pg_dump -d <dbname> -Fd -j 4 -n <schema_name> -f <dump_directory>
-- If it's several hundreds of GBs or TBs, you may rather utilize one of
your standbys, with replication paused using -> select
pg_wal_replay_pause();
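When dumping from a standby, a minimal sketch of that pause/resume sequence
(capturing the replay LSN is an addition here, useful if you later need
pg_replication_origin_advance):

```sql
-- On the standby: freeze WAL replay so pg_dump sees a consistent point.
SELECT pg_wal_replay_pause();
-- Note the position the standby has replayed up to.
SELECT pg_last_wal_replay_lsn();
-- ... run pg_dump against this standby ...
-- Then let the standby catch up again.
SELECT pg_wal_replay_resume();
```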

Step 6 : You don't need to wait until pg_dump completes; you may start the
app.
-- Hopefully the app does not perform changes that impact the pg_dump, or
get blocked by it.
Step 7 : Restore the dump, if you used pg_dump:
pg_restore -d <dbname> -j <parallel_jobs> <dump_directory>
Step 8 : Enable the subscription:
ALTER SUBSCRIPTION <subscription_name> ENABLE;

If you have not stopped your app, then you must advance the LSN using
pg_replication_origin_advance
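As a rough illustration of that last point (the names and LSN below are
hypothetical; for a native subscription, the replication origin is named
'pg_' followed by the subscription's OID):

```sql
-- On the subscriber: find the replication origin belonging to the
-- (still disabled) subscription.
SELECT 'pg_' || oid AS origin_name
FROM pg_subscription
WHERE subname = 'my_subscription';

-- Advance it to the LSN the dump corresponds to, so changes that are
-- already contained in the dump are not applied a second time.
SELECT pg_replication_origin_advance('pg_16408', '0/5000060'::pg_lsn);
```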



These are all hand-written steps drafted while composing this email, so
please test them on your end, as some typos or adjustments are definitely
expected.


-- 
Regards,
Avinash Vallarapu (Avi)
CEO,
MigOps, Inc.


Re: Logical Replication speed-up initial data

2021-08-05 Thread Nikhil Shetty
Hi Avinash,

Thank you for the detailed explanation.

Indexes were dropped on the destination to increase the initial data load
speed. We cannot stop the app on the source, and it is highly transactional.
I had thought about this method, but I am not sure from where logical
replication will start after the pg_restore; we cannot afford to lose any
data.

I will give this method a test though and check how it works.

Thanks,
Nikhil

On Thu, Aug 5, 2021 at 8:42 PM Avinash Kumar 
wrote:



Re: Logical Replication speed-up initial data

2021-08-05 Thread Nikhil Shetty
Hi Vijaykumar,

Thanks for the details.
In this method, are you saying that pg_basebackup will make the initial
load faster?
We intend to bring over only a few tables, and pg_basebackup will clone an
entire instance.

Thanks,
Nikhil



On Thu, Aug 5, 2021 at 7:57 PM Vijaykumar Jain <
[email protected]> wrote:



Re: Logical Replication speed-up initial data

2021-08-05 Thread Vijaykumar Jain
On Fri, 6 Aug 2021 at 00:15, Nikhil Shetty  wrote:

> Hi Vijaykumar,
>
> Thanks for the details.
> In this method you are saying the pg_basebackup will make the initial load
> faster ?
>
> We intend to bring only a few tables. Using pg_basebackup will clone an
> entire instance.
>

Yeah, in that case this will not be useful. I assumed you wanted all the
tables.
pglogical/pglogical_create_subscriber.c at REL2_x_STABLE ·
2ndQuadrant/pglogical (github.com)



Re: Logical Replication speed-up initial data

2021-08-05 Thread Jeff Janes
On Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty 
wrote:

> Hi,
>
> Thank you for the suggestion.
>
> We tried dropping indexes and it worked faster compared to what we saw
> earlier. We wanted to know if anybody has made any other changes that help
> speed up the initial data load without dropping indexes.
>

If index maintenance is the bottleneck, nothing but dropping the indexes is
likely to be very effective. Just make sure not to drop the replica
identity index. If you do that, then the entire sync will abort and
roll back once it gets to the end, if the master had had any UPDATE or
DELETE activity on that table during the sync period. (v14 removes
that problem: replication still won't proceed until you have the index, but
the previously synced work will not be lost while it waits for you to build
the index.)
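A quick way to check a table's replica identity before dropping anything
(the table name is hypothetical):

```sql
-- relreplident: 'd' = default (the primary key is the identity),
-- 'i' = an explicitly chosen index, 'f' = full row, 'n' = nothing.
-- indisreplident is only set in the 'i' case (REPLICA IDENTITY USING INDEX).
SELECT c.relreplident,
       i.indexrelid::regclass AS replica_identity_index
FROM pg_class c
LEFT JOIN pg_index i
       ON i.indrelid = c.oid AND i.indisreplident
WHERE c.oid = 'my_table'::regclass;
```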

Syncing with the indexes still in place might go faster if shared_buffers
is large enough to hold the entire incipient index(es) simultaneously. It
might be worthwhile to set shared_buffers to a large fraction of RAM (like
90%) if doing so lets the entire index fit into shared_buffers and nothing
else significant is running on the server. You probably wouldn't want that
as a permanent setting, though.
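A sketch of making that change temporarily (the size is hypothetical, and
shared_buffers changes only take effect after a server restart):

```sql
-- Hypothetical: dedicate most of a 64 GB server to shared_buffers
-- for the duration of the sync. Requires a restart to take effect.
ALTER SYSTEM SET shared_buffers = '58GB';
-- ... restart, run the table sync ...
-- Afterwards, return to the normal setting (and restart again).
ALTER SYSTEM RESET shared_buffers;
```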

Cheers,

Jeff