I'm trying to troubleshoot the slowness issue with pg_restore and stumbled
across a recent post about pg_restore scanning the whole file:
> "scanning happens in a very inefficient way, with many seek calls and
> small block reads. Try strace to see them. This initial phase can take
> hours in a huge [...]"
[...]n your post.
Regards,
Rianto
On Fri, 19 Sept 2025 at 07:45, Adrian Klaver wrote:
>
>
> On 9/18/25 2:36 PM, R Wahyudi wrote:
> > I've been given a database dump file daily and I've been asked to
> > restore it.
> > I tried everything I could to speed up [...]
On Fri, 19 Sept 2025 at 01:54, Adrian Klaver wrote:
> On 9/18/25 05:58, R Wahyudi wrote:
> > Hi All,
> >
> > Thanks for the quick and accurate response! I've never been so happy
> > to see iowait on my system!
>
> Because?
>
> What did you find?
>
> >
then tar the directory
> of compressed files using the --remove-files option.)
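The approach quoted above (dump to directory format, compress each member, then tar the directory with --remove-files) can be sketched as follows. This is a sketch, not the poster's exact commands: the names (backup_dir, mydb, the member filename) are placeholders, and the pg_dump line is commented out because it needs a live server; a stand-in file mimics one dump member so the compress-then-tar steps are visible.

```shell
# Placeholders: backup_dir, mydb. The pg_dump line needs a live server:
#   pg_dump -Fd -Z 0 -j 4 -d mydb -f backup_dir   # uncompressed directory-format dump
mkdir -p backup_dir
printf 'fake table data' > backup_dir/3001.dat    # stand-in for one dump member
gzip --best backup_dir/3001.dat                   # compress each file individually
tar -cf backup_dir.tar --remove-files backup_dir  # archive, deleting the originals (GNU tar)
```

Note that --remove-files is a GNU tar extension; on other tar implementations you would rm the directory yourself after verifying the archive.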
>
> On Tue, Sep 16, 2025 at 10:50 PM R Wahyudi wrote:
>
>> Sorry for not including the full command - yes, it's piping to a
>> compression command:
>> | lbzip2 -n --best >
>>
pg_dump was done using the following command:
pg_dump -Fc -Z 0 -h -U -w -d
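For contrast, a sketch of the two-step alternative discussed in this thread: write the custom-format dump to a seekable file first (so pg_dump can record offsets in the TOC), then compress it afterwards rather than in a pipe. HOST, USER, DB, and the lbzip2 thread count are placeholders, not the redacted values from the command above; the pg_dump and lbzip2 lines are commented out because they need a live server, and generic tools stand in to show the dump-then-compress shape.

```shell
# Placeholders: HOST, USER, DB. These lines need a live server:
#   pg_dump -Fc -Z 0 -h HOST -U USER -w -d DB -f db.dump  # seekable file: offsets recorded in TOC
#   lbzip2 -n 8 --best db.dump                            # compress afterwards, not in the pipe
# Stand-in with generic tools showing the same two-step shape:
printf 'fake custom-format dump' > db.dump
gzip --best db.dump            # replaces db.dump with db.dump.gz
```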
On Wed, 17 Sept 2025 at 08:36, Adrian Klaver wrote:
> On 9/16/25 15:25, R Wahyudi wrote:
> >
> > I'm trying to troubleshoot the slowness issue with pg_restore and
> > stumbled across a re[...]
If so, then that's the problem.
>
> pg_dump directly to a file puts file offsets in the TOC.
>
> This is how I do custom dumps:
> cd $BackupDir
> pg_dump -Fc --compress=zstd:long -v -d${db} -f ${db}.dump 2> ${db}.log
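The point about offsets can be illustrated with a toy seekable file. This is not PostgreSQL's actual dump format, just the seek-vs-scan idea: when the TOC records where a member starts, the reader can jump straight to it with one seek instead of streaming through everything before it.

```shell
# Toy illustration of seek-vs-scan; not PostgreSQL's real on-disk format.
head -c 1048576 /dev/zero > big.bin   # 1 MiB of filler standing in for earlier members
printf 'TABLEDATA' >> big.bin         # the "member" we want, at offset 1048576
# With a recorded offset, read it directly (one seek, one small read):
dd if=big.bin bs=1048576 skip=1 2>/dev/null   # → TABLEDATA
```

Without the offset, the only option is to read from the start of the file until the member is found, which is the slow initial scanning phase described earlier in the thread.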
>
> On Tue, Sep 16, 2025 at 8:54 PM R Wahyudi wrote: