[no subject]

2022-07-26 Thread Rama Krishnan
Hi all,

How do I take a backup of a table using the directory format?
I have a very large table, and when I use pg_dump it takes a long time.
Kindly suggest a solution.


Re:

2022-07-26 Thread Adrian Klaver

On 7/26/22 06:27, Rama Krishnan wrote:

Hi all,

How do I take a backup of a table using the directory format?


pg_dump -d <database> -U <user> -t <table> -Fd -f <output_directory>
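
For instance, with purely illustrative values (database cricket, user postgres, table scores):

pg_dump -d cricket -U postgres -t scores -Fd -f /backups/scores_dir  # placeholder names and path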

I have a very large table, and when I use pg_dump it takes a long time. 
Kindly suggest a solution.


Not sure what the above means, so:

What is the size of the table?

What sort of time interval are you seeing?

What problem is it causing?

--
Adrian Klaver
adrian.kla...@aklaver.com




Re:

2022-07-26 Thread Rama Krishnan
Hi Adrian


Thanks for your reply,

My actual database size is 320 GB. Taking a custom-format dump and moving it
directly into S3 took more than a day, so I am trying to use the directory
format, because it supports the parallel option (-j).
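
For reference, a parallel directory-format dump along those lines, with the connection options, job count, and output path as placeholders, might look like:

pg_dump -h <endpoint> -U postgres -d cricket -Fd -j 4 -f /backups/cricket_dir  # -j 4 runs four dump jobs in parallel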

What is the size of the table?

I have two databases, for example:

01. cricket: 320 GB
02. badminton: 250 GB

What sort of time interval are you seeing?

I am purging data to keep only one year of data in the database; data older
than a year I am going to dump as a backup, for future reporting purposes.

What problem is it causing?

The normal custom-format backup took more than a day.

On Tue, 26 Jul, 2022, 20:34 Adrian Klaver wrote:

> On 7/26/22 06:27, Rama Krishnan wrote:
> > Hi all,
> >
> > How do I take a backup of a table using the directory format?
>
> pg_dump -d <database> -U <user> -t <table> -Fd -f <output_directory>
>
> > I have a very large table, and when I use pg_dump it takes a long time.
> > Kindly suggest a solution.
>
> Not sure what the above means, so:
>
> What is the size of the table?
>
> What sort of time interval are you seeing?
>
> What problem is it causing?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re:

2022-07-26 Thread Adrian Klaver

On 7/26/22 08:15, Rama Krishnan wrote:

Hi Adrian


Thanks for your reply,

My actual database size is 320 GB. Taking a custom-format dump and moving 
it directly into S3 took more than a day, so I am trying to use the 
directory format, because it supports the parallel option (-j).


Is the database in AWS also or is it locally hosted?

In either case what is the network distance that the data has to cross?

What is the network speed of the slowest link?



What is the size of the table?

I have two databases, for example:

01. cricket: 320 GB
02. badminton: 250 GB


So you are talking about an entire database not a single table, correct?



What sort of time interval are you seeing?

I am purging data to keep only one year of data in the database; data older 
than a year I am going to dump as a backup, for future reporting purposes.


What problem is it causing?

The normal custom-format backup took more than a day.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re:

2022-07-26 Thread Ron

On 7/26/22 10:22, Adrian Klaver wrote:

On 7/26/22 08:15, Rama Krishnan wrote:

Hi Adrian


Thanks for your reply,

My actual database size is 320 GB. Taking a custom-format dump and moving 
it directly into S3 took more than a day, so I am trying to use the 
directory format, because it supports the parallel option (-j).


Is the database in AWS also or is it locally hosted?

In either case what is the network distance that the data has to cross?

What is the network speed of the slowest link?



What is the size of the table?

I have two databases, for example:

01. cricket: 320 GB
02. badminton: 250 GB


So you are talking about an entire database not a single table, correct?


In a private email, he said that this is what he's trying:
Pg_dump -h endpoint -U postgres Fd - d cricket | aws cp - s3://dump/cricket.dump

It failed for obvious reasons.
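
If streaming straight to S3 is the goal, a sketch that can work (reusing the endpoint, database, and bucket names from that command purely as placeholders) is a custom-format dump, which, unlike the directory format, can be written to stdout:

pg_dump -h endpoint -U postgres -d cricket -Fc | aws s3 cp - s3://dump/cricket.dump  # "-" makes aws s3 cp read stdin

Note it is "aws s3 cp", not "aws cp".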


--
Angular momentum makes the world go 'round.




Re:

2022-07-26 Thread Adrian Klaver

On 7/26/22 9:29 AM, Ron wrote:

On 7/26/22 10:22, Adrian Klaver wrote:

On 7/26/22 08:15, Rama Krishnan wrote:

Hi Adrian





What is the size of the table?

I have two databases, for example:

01. cricket: 320 GB
02. badminton: 250 GB


So you are talking about an entire database not a single table, correct?


In a private email, he said that this is what he's trying:
Pg_dump -h endpoint -U postgres Fd - d cricket | aws cp - 
s3://dump/cricket.dump


It failed for obvious reasons.



From what I gather it did not fail, it just took a long time. Not sure 
adding -j to the above will improve things, pretty sure the choke point 
is still going to be aws cp.


Rama, if you have the space, would it not be better to dump locally using 
-Fc to get a compressed format and then upload that to S3 as a separate 
process?
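
A minimal sketch of that two-step approach (local path and connection options are assumptions; the bucket is taken from the quoted command):

pg_dump -h endpoint -U postgres -d cricket -Fc -f /backups/cricket.dump  # compressed custom-format dump to local disk
aws s3 cp /backups/cricket.dump s3://dump/cricket.dump  # upload to S3 as a separate step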



--
Adrian Klaver
adrian.kla...@aklaver.com




Re:

2022-07-26 Thread hubert depesz lubaczewski
On Tue, Jul 26, 2022 at 10:48:47AM -0700, Adrian Klaver wrote:
> On 7/26/22 9:29 AM, Ron wrote:
> > On 7/26/22 10:22, Adrian Klaver wrote:
> > > On 7/26/22 08:15, Rama Krishnan wrote:
> > > > Hi Adrian
> > > > 
> > > > 
> 
> > > > What is the size of the table?
> > > > 
> > > > I have two databases, for example:
> > > > 
> > > > 01. cricket: 320 GB
> > > > 02. badminton: 250 GB
> > > 
> > > So you are talking about an entire database not a single table, correct?
> > 
> > In a private email, he said that this is what he's trying:
> > Pg_dump -h endpoint -U postgres Fd - d cricket | aws cp -
> > s3://dump/cricket.dump
> > 
> > It failed for obvious reasons.
> From what I gather it did not fail, it just took a long time. Not sure
> adding -j to the above will improve things, pretty sure the choke point is
> still going to be aws cp.

It's really hard to say what is happening, because the command, as shown,
wouldn't even work.

Starting with Pg_dump vs. pg_dump, the space between `-` and `d`, "Fd" as an
argument, and even the idea that you *can* write -Fd dumps to stdout and
pipe them to aws cp (you can't; the directory format needs a target directory).
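
A directory-format variant that would work (a sketch only; the job count, local path, and bucket layout are assumptions) is to dump to a local directory first and copy it up afterwards:

pg_dump -h endpoint -U postgres -d cricket -Fd -j 4 -f /backups/cricket_dir  # parallel dump to a local directory
aws s3 cp /backups/cricket_dir s3://dump/cricket_dir --recursive  # copy the whole dump directory to S3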

depesz