Re: Max_connections limit

2019-06-26 Thread HER
You know that Postgres does not have a shared pool like Oracle, and session
information (execution plans, etc.) is only available to the current session.
Therefore I also highly recommend using a connection pooler, as Laurenz wrote,
in order to have a higher chance that some of that state is already cached in
the reused sessions.
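
For example, a minimal PgBouncer setup in transaction pooling mode could look
roughly like this (database name, auth file and pool sizes are only
illustrative placeholders, not a recommendation for your system):

    ; pgbouncer.ini -- illustrative values only
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    default_pool_size = 50
    max_client_conn = 3000

The application then connects to port 6432 instead of 5432, and PostgreSQL
itself can stay at a modest max_connections (e.g. 50-100), as Laurenz suggested.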

Regards
Herve 



Sent from my iPhone

> On 26 June 2019 at 11:05, Laurenz Albe wrote:
> 
> Daulat Ram wrote:
>> We have migrated our database from Oracle 12c to Postgres 11. I need your
>> suggestions: we have a sessions limit in Oracle of 3024. Do we need to set
>> the same connection limit in Postgres as well? How can we decide the
>> max_connections limit for Postgres? Are there any differences in managing
>> connections in Oracle and Postgres?
> 
> I'd say that is way too high in both Oracle and PostgreSQL.
> 
> Set the value to 50 or 100 and get a connection pooler if the
> application cannot do that itself.
> 
> Yours,
> Laurenz Albe
> -- 
> Cybertec | https://www.cybertec-postgresql.com
> 
> 
> 


Re: procedure using CURSOR to insert is extremely slow

2021-04-08 Thread HER
If you do a delete in the first step without any statistics, your request will
do a full scan of the table, which will be slower.

Did you check the different execution plans?
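
For example, running the step-1 DELETE once with EXPLAIN (ANALYZE, BUFFERS) for
a new log_id and once for an existing one should show whether a sequential scan
is chosen (the table and column names below are placeholders):

    -- refresh the planner statistics on the target table first
    ANALYZE target_schema.target_table;

    -- note: EXPLAIN (ANALYZE) really executes the DELETE,
    -- so wrap it in BEGIN; ... ROLLBACK; if you only want the plan
    EXPLAIN (ANALYZE, BUFFERS)
    DELETE FROM target_schema.target_table WHERE log_id = 12345;

An index on log_id would normally let such a DELETE avoid a full table scan.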



From: Szalontai Zoltán 
Sent: Thursday, April 8, 2021 01:24 PM
To: [email protected] 

Subject: procedure using CURSOR to insert is extremely slow


Hi,



We have a db.t2.medium class database on AWS.

We use a procedure to transfer data records from the Source to the Target 
Schema.

Transfers are identified by the log_id field in the target table.



The procedure is:

1. All records are deleted from the Target table with the actual log_id value.

2. A complicated SELECT (numerous tables are joined) is run on the Source
system.

3. A cursor is defined based on this SELECT.

4. We go through the CURSOR and insert new records into the Target table with
this log_id.



(Actually we have about 100 tables in the Target schema and the size of the 
database backup file is about 1GByte. But we do the same for all the Target 
tables.)
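
Simplified, one such transfer looks roughly like this (the table and column
names here are placeholders, the real SELECT joins many tables):

    CREATE OR REPLACE PROCEDURE transfer_one_table(p_log_id bigint)
    LANGUAGE plpgsql
    AS $$
    DECLARE
        rec RECORD;
        cur CURSOR FOR
            SELECT s.id, s.payload          -- really a complicated multi-table join
            FROM   source_schema.source_table s;
    BEGIN
        -- step 1: delete any previous rows for this log_id
        DELETE FROM target_schema.target_table WHERE log_id = p_log_id;

        -- steps 2-4: loop through the cursor and insert row by row
        OPEN cur;
        LOOP
            FETCH cur INTO rec;
            EXIT WHEN NOT FOUND;
            INSERT INTO target_schema.target_table (log_id, id, payload)
            VALUES (p_log_id, rec.id, rec.payload);
        END LOOP;
        CLOSE cur;
    END;
    $$;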



Our procedure is extremely slow for the first run: 3 days for the 100 tables.
For the second and all subsequent runs it is fast enough (15 minutes).

The only difference between the first run and all the others is that in the 
first run there are no records in the Target schema with this log_id.



It seems that in the first step the DELETE operation frees up some “space”,
and the INSERT operation in step 4 can reuse this space. But if no records
are deleted in the first step, the procedure is extremely slow.
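
A simple way to check whether a Target table actually contains reusable dead
rows, and whether it has been analyzed at all, is a query like this (the schema
name is a placeholder):

    SELECT relname, n_live_tup, n_dead_tup,
           last_autovacuum, last_autoanalyze
    FROM   pg_stat_user_tables
    WHERE  schemaname = 'target_schema';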



To speed up the first run we found the following workaround:

We inserted dummy records into the Target tables with the proper log_id, and
indeed the first run became very fast again.



Is there any “normal” way to speed up this procedure?

In the production environment there will be only “first runs”; the same log_id
will never be used again.





Thanks,

Zoltán