1. The tables have no indexes at the time of load.
2. The CREATE TABLE and COPY are in the same transaction.

So I guess that's pretty much it. I understand the long time it takes, as some of the tables have 400+ million rows. Also, the env is a container, and since this is currently a POC system, …
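On point 2: that pattern matters because, with wal_level = minimal, a COPY into a table created (or truncated) in the same transaction can skip WAL logging entirely, which is one of the tips on the "Populating a Database" page linked below. A minimal psql sketch of the pattern (the database name mydb, the table big_table, and the file path are placeholders, not from this thread):

    psql -d mydb <<'SQL'
    BEGIN;
    -- Creating the table in the same transaction as the COPY lets
    -- PostgreSQL skip WAL for the load when wal_level = minimal.
    CREATE TABLE big_table (id int, payload text);
    \copy big_table from '/data/part1.csv' with (format csv)
    COMMIT;
    SQL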
*From:* Ravi Krishna
*Sent:* Monday, August 20, 2018 8:24:35 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* [External] Multiple COPY on the same table
Can I split a large file into multiple files and then run COPY using each file? The table does not contain any serial or sequence column which may need serialization. Let us say I split a large file into 4 files. Will the performance …
I guess this should help you, Ravi.
https://www.postgresql.org/docs/10/static/populate.html
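The split-and-parallel-load idea itself might look roughly like this from the shell. This is only a sketch: it assumes GNU split (for the -n l/4 line-based chunking), a database mydb, a target table big_table, and CSV data with no quoted embedded newlines; none of those names come from the thread.

    # Split by lines (l/4), not bytes, so no CSV row is cut in half.
    split -n l/4 big_file.csv part_

    # Run one COPY per chunk in parallel, each in its own session.
    for f in part_*; do
        psql -d mydb -c "\copy big_table from '$f' with (format csv)" &
    done
    wait

Concurrent COPY sessions on the same table do not block one another on inserts, but each runs in its own transaction, so this gives up the same-transaction WAL-skip trick shown above, and the speedup is bounded by disk and CPU rather than guaranteed to approach 4x.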
On 8/20/18, 10:30 PM, "Christopher Browne" wrote:
On Mon, 20 Aug 2018 at 12:53, Ravi Krishna wrote:
> > What is the goal you are trying to achieve here?
> > To make pg_dump/restore faster?
> > To make replication faster?
> > To make backup faster?
>
> None of the above.
>
> We got CSV files from an external vendor which are 880 GB in total size, in 44 files. Some of the large tables had COPY running for several hours.