procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Szalontai Zoltán
Hi,

 

We have a db.t2.medium class database on AWS.

We use a procedure to transfer data records from the Source to the Target
Schema.

Transfers are identified by the log_id field in the target table.

 

The procedure is:

1 all records are deleted from the Target table with the actual log_id value

2 a complicated SELECT (numerous tables are joined) is created on the Source
system 

3 a cursor is defined based on this SELECT

4 we go through the CURSOR and insert new records into the Target table with
this log_id (a minimal sketch of this pattern is shown below)

 

(Actually we have about 100 tables in the Target schema and the size of the
database backup file is about 1GByte. But we do the same for all the Target
tables.)
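
For illustration, a minimal sketch of this pattern (with hypothetical table
and column names, not our actual code; assuming PostgreSQL 11 or later for
CREATE PROCEDURE):

CREATE OR REPLACE PROCEDURE transfer_orders(p_log_id bigint)
LANGUAGE plpgsql
AS $$
DECLARE
    -- step 3: a cursor bound to the complicated source SELECT
    cur CURSOR FOR
        SELECT o.id, o.amount, c.name
        FROM source.orders o
        JOIN source.customers c ON c.id = o.customer_id;
BEGIN
    -- step 1: remove any rows left over from a previous run with this log_id
    DELETE FROM target.orders WHERE log_id = p_log_id;

    -- step 4: walk the cursor and insert the rows one by one
    FOR rec IN cur LOOP
        INSERT INTO target.orders (id, amount, customer_name, log_id)
        VALUES (rec.id, rec.amount, rec.name, p_log_id);
    END LOOP;
END;
$$;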

 

Our procedure is extremely slow for the first run: 3 days for the 100
tables. For the second and all subsequent runs it is fast enough (15
minutes). 

The only difference between the first run and all the others is that in the
first run there are no records in the Target schema with this log_id.

 

It seems that in the first step the DELETE operation frees some "space",
and the INSERT operation in step 4 can reuse this space. But if no records
are deleted in the first step, the procedure is extremely slow.

 

To speed up the first run we found the following workaround:

We inserted dummy records into the Target tables with the proper log_id, and
the first run indeed became very fast.

 

Is there any "normal" way to speed up this procedure?

In the production environment there will be only "first runs", the same
log_id will never be used again.

 

 

Thanks,

Zoltán

 

 



str_aggr function not working

2021-04-08 Thread aditya desai
Hi,
I need to combine the results of multiple rows into one row. I get the below error.
Could you please help?

Query:

select string_agg((select '**' || P.PhaseName || ' - ' ||
R.Recommendation AS "ABC" from tblAssessmentRecommendation
R,tblAssessmentPhases P
where  R.PhaseID = P.PhaseID  Order BY P.sortOrder DESC),' ')

Error:

ERROR: more than one row returned by a subquery used as an expression
SQL state: 21000

Regards,
Aditya.


Re: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread HER
If you do the delete in the first step without any statistics, your request will
do a full scan of the table, which will be slower.

Did you check the different execution plans?
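
For example, statistics can be refreshed and the plan of the DELETE inspected
like this (a minimal sketch with a hypothetical table name; note that EXPLAIN
ANALYZE actually executes the DELETE, so wrap it in a transaction and roll it
back if the rows should stay):

ANALYZE target_table;

BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM target_table WHERE log_id = 42;
ROLLBACK;

For statements executed inside a PL/pgSQL procedure, the auto_explain
extension can log their plans.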



From: Szalontai Zoltán 
Sent: Thursday, April 8, 2021 01:24 PM
To: [email protected] 

Subject: procedure using CURSOR to insert is extremely slow








RE: str_aggr function not working

2021-04-08 Thread Patrick FICHE
From: aditya desai 
Sent: Thursday, April 8, 2021 1:32 PM
To: Pgsql Performance 
Subject: str_aggr function not working



Hi,

I would suggest you try something like this instead:

select string_agg( '**' || P.PhaseName || ' - ' || R.Recommendation, ' '
ORDER BY P.sortOrder DESC ) AS "ABC"
from tblAssessmentRecommendation R, tblAssessmentPhases P
where  R.PhaseID = P.PhaseID

Regards,

Patrick



RE: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Szalontai Zoltán
How to check execution plans?

We are in the Loop of the Cursor, and we do insert operations in it.

 

From: Hervé Schweitzer (HER)  
Sent: Thursday, April 8, 2021 1:40 PM
To: Szalontai Zoltán ;
[email protected]
Subject: Re: procedure using CURSOR to insert is extremely slow

 


 

 



Re: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Milos Babic
Hi Zoltan,

is there any particular reason why you don't do a bulk insert as:
   insert into target_table
   select ... from source_table(s) (with joins etc)

Regards,
Milos



On Thu, Apr 8, 2021 at 1:24 PM Szalontai Zoltán <
[email protected]> wrote:



-- 
Milos Babic
http://www.linkedin.com/in/milosbabic


Re: str_aggr function not working

2021-04-08 Thread aditya desai
Thanks Patrick. I used a WITH query and fed that output to string_agg,
which worked. However, it is giving performance issues. Will check on that.
Thanks.
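
A minimal sketch of that WITH + string_agg rewrite (using the tables from the
original query; not the exact production query):

WITH recs AS (
    SELECT '**' || P.PhaseName || ' - ' || R.Recommendation AS line,
           P.sortOrder
    FROM tblAssessmentRecommendation R
    JOIN tblAssessmentPhases P ON R.PhaseID = P.PhaseID
)
SELECT string_agg(line, ' ' ORDER BY sortOrder DESC) AS "ABC"
FROM recs;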

On Thu, Apr 8, 2021 at 5:11 PM Patrick FICHE 
wrote:



Re: str_aggr function not working

2021-04-08 Thread aditya desai
Sure!! Thanks for the response. Apologies for the multiple questions. Faced
this during a high-priority MSSQL to PostgreSQL migration. Did not see any
equivalent of XML PATH that would give the desired results. Finally was able
to resolve the issue by rewriting the proc using WITH and string_agg in
combination. However, still facing performance issues with it. Will
investigate.

On Thu, Apr 8, 2021 at 5:08 PM Mike Sofen  wrote:

> You realize that there are a million answers to your questions online?
> Are you doing any google searches before bothering this list with basic
> questions?  I personally never email this list until I’ve exhausted all
> searches and extensive trial and error, as do most practitioners.  This
> list is incredibly patient and polite, and...there are limits.  Please
> consider doing more research before asking a question.  In your example,
> you’re getting a basic subquery error – research how to fix that.
> Mike


RE: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Szalontai Zoltán
Hi Milos,

 

Inside the loops, if/else branches and value transformations are frequently
used.

We could not solve it without using a cursor.

 

Regards,

Zoltán

 

From: Milos Babic  
Sent: Thursday, April 8, 2021 2:31 PM
To: Szalontai Zoltán 
Cc: Pgsql Performance 
Subject: Re: procedure using CURSOR to insert is extremely slow

 




RE: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Mike Sofen
Hi Zoltan,

 

I haven’t needed to use a cursor in 20 years of sometimes very complex sql 
coding. 

 

Why?  Cursors result in RBAR (row by agonizing row) operation which eliminates 
the power of set-based sql operations.  Performance will always suffer – 
sometimes to extremes.  I’m all about fastest possible performance for a given 
sql solution.

 

How?  There have been times I’ve initially said a similar thing – “I don’t see 
how to solve this without a cursor”.  When I hit that point, I stop and 
decompose the problem into simpler bits, and soak on it and always – literally 
always – a solution will appear.  

 

It’s all in how we envision the solution, especially with Postgres and its 
amazing ecosystem of sql functions.  We really can do almost anything.  Since 
the code is obviously way too complex to post here, I’d simply encourage you to 
rethink how you’re approaching the solution.

 

Mike

 

From: Szalontai Zoltán  
Sent: Thursday, April 08, 2021 6:57 AM
To: 'Milos Babic' 
Cc: 'Pgsql Performance' 
Subject: RE: procedure using CURSOR to insert is extremely slow

 




Re: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Milos Babic
Hi Zoltan,

you should try to rethink the logic behind the query.
Numerous if/then/else can be transformed into case-when, or a bunch of
unions, which, I'm 100% certain, will do much better than row-by-row
insertion.
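
For example (a sketch with hypothetical table and column names), an if/else
on a status column inside the loop becomes a CASE expression in a single
set-based INSERT ... SELECT:

INSERT INTO target_table (id, category, log_id)
SELECT s.id,
       CASE
           WHEN s.status = 'A' THEN 'active'
           WHEN s.status = 'C' THEN 'closed'
           ELSE 'other'
       END AS category,
       42 AS log_id   -- the log_id of this transfer
FROM source_table s;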

However, this is a general note.
Still, it doesn't explain why the insert is faster when rows are deleted first (?!!)
Is there any chance the set you are inserting in the second run is smaller
(e.g. only a fraction of the original one)?

If possible, you can send over a fragment of the code, and we can look into
it.

regards,
Milos






On Thu, Apr 8, 2021 at 3:56 PM Szalontai Zoltán <
[email protected]> wrote:



-- 
Milos Babic
http://www.linkedin.com/in/milosbabic


RE: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread Szalontai Zoltán
Hi Milos,

 

I will discuss this kind of rethinking with the team.

 

Perhaps I can copy our database on AWS for you, and you can check it.

 

thanks,

Zoltán

 

From: Milos Babic  
Sent: Thursday, April 8, 2021 8:22 PM
To: Szalontai Zoltán 
Cc: Pgsql Performance 
Subject: Re: procedure using CURSOR to insert is extremely slow

 




INSERTS waiting with wait_event is "transactionid"

2021-04-08 Thread Nagaraj Raj
Hi,
We are trying to load around 1 billion records into one table with INSERT
statements (we are not able to use the COPY command), and they have been
waiting for a lock with wait_event "transactionid". I didn't find any
information about this in the documentation. Queries have been waiting for
hours.
Table DDL:

CREATE TABLE test_load (
    billg_acct_cid_hash  character varying(50)  COLLATE pg_catalog."default" NOT NULL,
    accs_mthd_cid_hash   character varying(50)  COLLATE pg_catalog."default" NOT NULL,
    soc                  character varying(10)  COLLATE pg_catalog."default" NOT NULL,
    soc_desc             character varying(100) COLLATE pg_catalog."default",
    service_type_cd      character varying(10)  COLLATE pg_catalog."default",
    soc_start_dt         date,
    soc_end_dt           date,
    product_eff_dt       date,
    product_exp_dt       date,
    curr_ind             character varying(1)   COLLATE pg_catalog."default",
    load_dttm            timestamp without time zone NOT NULL,
    updt_dttm            timestamp without time zone,
    md5_chk_sum          character varying(100) COLLATE pg_catalog."default",
    deld_from_src_ind    character(1)           COLLATE pg_catalog."default",
    orphan_ind           character(1)           COLLATE pg_catalog."default",
    CONSTRAINT test_load_pk PRIMARY KEY (billg_acct_cid_hash, accs_mthd_cid_hash, soc)
);
Query on pg_locks:

SELECT COALESCE(blockingl.relation::regclass::text, blockingl.locktype) AS locked_item,
       now() - blockeda.query_start AS waiting_duration,
       blockeda.pid AS blocked_pid,
       left(blockeda.query, 7) AS blocked_query,
       blockedl.mode AS blocked_mode,
       blockinga.pid AS blocking_pid,
       left(blockinga.query, 7) AS blocking_query,
       blockingl.mode AS blocking_mode
FROM pg_locks blockedl
JOIN pg_stat_activity blockeda ON blockedl.pid = blockeda.pid
JOIN pg_locks blockingl
     ON (blockingl.transactionid = blockedl.transactionid
         OR blockingl.relation = blockedl.relation AND blockingl.locktype = blockedl.locktype)
     AND blockedl.pid <> blockingl.pid
JOIN pg_stat_activity blockinga
     ON blockingl.pid = blockinga.pid AND blockinga.datid = blockeda.datid
WHERE NOT blockedl.granted
ORDER BY blockeda.query_start;

Results:

locked_item   | waiting_duration | blocked_pid | blocked_query | blocked_mode | blocking_pid | blocking_query | blocking_mode
transactionid | 18:20:06.068154  | 681216      | INSERT        | ShareLock    | 679840       | INSERT         | ExclusiveLock
transactionid | 18:19:05.504781  | 679688      | INSERT        | ShareLock    | 679856       | INSERT         | ExclusiveLock
transactionid | 18:18:17.30099   | 679572      | INSERT        | ShareLock    | 679612       | INSERT         | ShareLock
transactionid | 18:18:17.30099   | 679572      | INSERT        | ShareLock    | 679580       | INSERT         | ShareLock
transactionid | 18:18:17.30099   | 679572      | INSERT        | ShareLock    | 681108       | INSERT         | ExclusiveLock
transactionid | 18:14:17.969603  | 681080      | INSERT        | ShareLock    | 681204       | INSERT         | ExclusiveLock
transactionid | 18:13:41.531575  | 681112      | INSERT        | ShareLock    | 679636       | INSERT         | ExclusiveLock
transactionid | 18:04:16.195069  | 679556      | INSERT        | ShareLock    | 679776       | INSERT         | ExclusiveLock
transactionid | 17:58:54.284211  | 679696      | INSERT        | ShareLock    | 678940       | INSERT         | ExclusiveLock
transactionid | 17:57:54.220879  | 681144      | INSERT        | ShareLock    | 679792       | INSERT         | ExclusiveLock
transactionid | 17:57:28.736147  | 679932      | INSERT        | ShareLock    | 679696       | INSERT         | ExclusiveLock
transactionid | 17:53:48.701858  | 679580      | INSERT        | ShareLock    | 679572       | INSERT         | ShareLock
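
(For comparison, a shorter way to list who blocks whom is pg_blocking_pids(),
available since PostgreSQL 9.6; a sketch:)

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       state,
       left(query, 30) AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;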

Query on pg_stat_activity:

SELECT pg_stat_activity.pid,
       pg_stat_activity.usename,
       pg_stat_activity.state,
       now() - pg_stat_activity.query_start AS runing_time,
       LEFT(pg_stat_activity.query, 7),
       pg_stat_activity.wait_event
FROM pg_stat_activity
ORDER BY (now() - pg_stat_activity.query_start) DESC;

Results:
| pid    | usename  | state  | runing_time | query   | wait_event    |
| 681216 | postgres | active | 07:32.7 | INSERT  | transactionid |
| 679688 | postgres | active | 06:32.2 | INSERT  | transactionid |
| 679572 | postgres | active | 05:44.0 | INSERT  | transactionid |
| 681080 | postgres | active | 01:44.6 | INSERT  | transactionid |
| 681112 | postgres | active | 01:08.2 | INSERT  | transactionid |
| 679556 | postgres | active | 51:42.9 | INSERT  | transactionid |
| 679696 | postgres | active | 46:20.9 | INSERT  | transactionid |
| 681144 | postgres | active | 45:20.9 | INSERT  | transactionid |
| 679932 | postgres | active | 44:55.4 | INSERT  | transactionid |
| 679580 | postgres | active | 41:15.4 | INSERT  | transactionid |
| 679400 | postgres | active | 39:51.2 | INSERT  | transactionid |
| 679852 | postgres | active | 37:05.3 | INSERT  | transactionid |
| 681188 | postgres | active | 36:23.2 | INSERT  | transactionid |
| 679544 | postgres | active | 35:33.4 | INSERT  | transactionid |
| 675460 | postgres | active | 26:06.8 | INSERT  | transactionid |



select version ();
PostgreSQL 11.6, compiled by Visual C++ build 1800, 64-bit

CPU: v32
RAM: 320 GB
shared_buffers = 64GB
effective_cache_size = 160 GB

any comments on the issue?

Thanks,
Rj