Import Database

2019-05-05 Thread Sathish Kumar
Hi,

I am trying to import a database of 300+ GB into another server. It's
taking a long time.

4vCPU
15GB RAM

psql -h newserver -U dbuser -d production -W < prod.sql



Is there a way to speed up the importing process by tweaking PostgreSQL
config like maintenance_work_mem, work_mem, shared_buffers, etc.?


Re: Import Database

2019-05-05 Thread Ravi Krishna
IMO you are using the slowest tool to import.

Just one quick question: why can't you take a cluster-level backup using one
of the available tools, and then drop all the unwanted databases after you
import the cluster?

pg_basebackup will do a good job.
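
A minimal sketch of the pg_basebackup route (host, user, and data directory are placeholders; the backup user needs the REPLICATION privilege and a matching pg_hba.conf entry, and this only works where you control the target's file system):

```shell
# Copy the whole cluster at the file level, streaming WAL alongside
# so the copy is consistent without manual WAL handling:
pg_basebackup -h olddbserver -U replica_user \
  -D /var/lib/postgresql/9.6/main -X stream -P

# Start the new cluster from that data directory, then drop what
# you do not need, e.g.:
#   psql -U postgres -c 'DROP DATABASE unwanted_db;'
```

Because this is a physical copy, it is all-or-nothing per cluster: you cannot pick a single database at backup time, hence the drop-afterwards step.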



Re: Import Database

2019-05-05 Thread Andreas Kretschmer




On 05.05.19 at 18:47, Sathish Kumar wrote:
Is there a way to speed up the importing process by tweaking
PostgreSQL config like maintenance_work_mem, work_mem, shared_buffers, etc.?


Sure: take the dump in custom format and use pg_restore with -j <number
of jobs>. You can also increase maintenance_work_mem, maybe to 4GB, to
speed up index creation.
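
A sketch of this approach (host and database names are placeholders; -j 4 matches the 4 vCPUs mentioned in the original post, and the PGOPTIONS environment variable applies the maintenance_work_mem bump only to the restore sessions):

```shell
# 1. Dump in custom format (-Fc); pg_restore can replay it in parallel:
pg_dump -h olddbserver -U dbuser -Fc -f prod.dump production

# 2. Restore with 4 parallel jobs, raising maintenance_work_mem per
#    session to speed up index creation:
PGOPTIONS="-c maintenance_work_mem=4GB" \
  pg_restore -h newserver -U dbuser -d production -j 4 prod.dump
```

Note that with parallel jobs each worker gets its own session, so on a 15GB machine a 4GB maintenance_work_mem across 4 jobs can be too aggressive; budget the value against concurrent index builds.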



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Import Database

2019-05-05 Thread Ron

On 5/5/19 12:20 PM, Andreas Kretschmer wrote:



On 05.05.19 at 18:47, Sathish Kumar wrote:
Is there a way to speed up the importing process by tweaking PostgreSQL
config like maintenance_work_mem, work_mem, shared_buffers, etc.?


Sure: take the dump in custom format and use pg_restore with -j <number
of jobs>.


Custom format or directory format?

--
Angular momentum makes the world go 'round.




Re: [External] Re: Import Database

2019-05-05 Thread Vijaykumar Jain
Yes.
I do bump up maintenance_work_mem temporarily during a restore; it helps
rebuild indexes a little faster.
Turning fsync off during the restore will speed it up a little too, but
in case of any crash you may have to restart the restore from scratch.
Also, do you have the option to take a pg_dump and run pg_restore, or do
you only have the raw SQL dump to work with?
If you can take a dump again, try pg_dump and pg_restore with -Fc (custom
format) and -j n (parallel jobs), along with the temporary
maintenance_work_mem bump. That should make the restore a little faster
than a raw SQL dump, I think.
If you are on pg10 or above, you can use logical replication to mirror
the database. There are blogs by several people explaining how to do
that, which may be helpful.
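
A sketch of the fsync idea (hostnames are placeholders; this needs superuser on the target, so managed services such as Cloud SQL generally will not allow it, and a crash while fsync is off means starting the restore over):

```shell
# fsync can be changed with a config reload, no restart needed:
psql -h newserver -U postgres -c "ALTER SYSTEM SET fsync = off"
psql -h newserver -U postgres -c "SELECT pg_reload_conf()"

# ... run the restore here ...

# Re-enable fsync as soon as the restore finishes:
psql -h newserver -U postgres -c "ALTER SYSTEM RESET fsync"
psql -h newserver -U postgres -c "SELECT pg_reload_conf()"
```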


On Sun, 5 May 2019 at 10:29 PM Ravi Krishna  wrote:


Regards,
Vijay


Re: [External] Re: Import Database

2019-05-05 Thread Sathish Kumar
Hi,

I am trying to export our database from a GCE instance to Google Cloud SQL.

Below are the commands used to export and import the database. I am
exporting only the one database that is required.

Export:

pg_dump -h olddbserver -U dbuser --format=plain --no-owner --no-acl
production | sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1
EXTENSION/g' > prod.sql


Import:

psql -h newcloudsqldbserver -U dbuser -d production -W < prod.sql
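
An alternative sketch that keeps the same extension-statement filtering but uses the custom format, so the restore can run in parallel; the grep on pg_restore's table of contents replaces the sed rewrite, and the pattern is an approximation to verify against your own dump's listing:

```shell
# Dump once in custom format, without ownership or ACL statements:
pg_dump -h olddbserver -U dbuser -Fc --no-owner --no-acl \
  -f prod.dump production

# Build a restore list that skips EXTENSION entries (including
# their COMMENT entries), instead of editing SQL with sed:
pg_restore -l prod.dump | grep -v 'EXTENSION' > restore.list

# Restore in parallel using only the filtered list:
pg_restore -h newcloudsqldbserver -U dbuser -d production \
  --no-owner --no-acl -j 4 -L restore.list prod.dump
```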


On Mon, May 6, 2019, 1:32 AM Vijaykumar Jain  wrote:



Re: [External] Re: Import Database

2019-05-05 Thread Sathish Kumar
Hi All,

PostgreSQL version: 9.6

On Mon, May 6, 2019, 7:14 AM Sathish Kumar  wrote:
