Postgresql database terminates abruptly with too many open files error

2025-01-14 Thread Sri Mrudula Attili

Hello Team,

 We have a PostgreSQL VDB (virtual database, Delphix) that keeps
terminating due to "too many open files".


Below are a few alerts that we could see in the postgresql.log


< 2025-01-14 11:37:20.724 GMT >LOG:  out of file descriptors: Too many 
open files in system; release and retry


< 2025-01-14 11:37:20.724 GMT >FATAL:  epoll_create1 failed: Too many 
open files in system



The number of allowed open files at the OS level is 65000. Even then we see
all of these being utilised, causing the database to terminate.



Is there a way we could avoid this?


Thanks,

Sri





Re: About PostgreSQL Query Plan

2025-01-14 Thread David Rowley
On Tue, 14 Jan 2025 at 03:45, Eşref Halıcıoğlu wrote:
> I do not fully understand the logic of this issue. I would be very grateful 
> if you can share information on the subject.
>
> The query plan is as follows.
>
> Update on "test_table1" tt1  (cost=0.13..159112.84 rows=0 width=0)
>   Update on "test_table1_partition_2020_10" tt1
>   Update on "test_table1_partition_2020_11" tt1
> ...
>   Update on "test_table1_partition_2025_12" tt1
>   Update on "test_table1_partition_default" tt1
>   ->  Nested Loop  (cost=0.13..159112.84 rows=1 width=53)
> ->  Seq Scan on "temp_test_table1" temp  (cost=0.00..19.20 rows=920 
> width=31)
> ->  Append  (cost=0.13..172.29 rows=64 width=38)
>   Subplans Removed: 60

The partitions mentioned in the "Update on" portion of the EXPLAIN
aren't being scanned. These are just result relations that could
potentially have tuples routed to them. The key to knowing that the
unrelated partitions are pruned is which partitions are mentioned below
the "Append" node. You can see from the "Subplans Removed: 60" part
that 60 of your 64 partitions were pruned. The executor is only going
to scan the 4 remaining ones that you see below the "Append".

I wouldn't worry too much about the additional partitions mentioned in
the "Update on". We could perhaps do a bit more work to initialise those
more lazily, as we do for INSERT statements, but I'd be surprised if it
was a problem for 64 partitions, especially for an update statement
that might be touching 3 months of data. Nothing about a partition
appearing in the "Update on" portion of the EXPLAIN output means it
will be scanned by the UPDATE statement, rest assured.

David




Re: pgAgent error on Installation

2025-01-14 Thread Daniel Gustafsson
> On 13 Jan 2025, at 20:11, Fakarai, Edgar  wrote:

> I am encountering a Certification verification error whenever I want to 
> install pgAgent from stackbuilder on Windows OS. I have attached the 
> screenshot of the error message.
> Thank you. If I take yes it gives me another error check the second 
> screenshot.

This is likely a network issue on your end, possibly a 6-second timeout or a
traffic-inspecting middlebox somewhere?  The document can be accessed and there
is nothing wrong with the certificate (I just had a look).  Try to access the
URL from your browser and see if that works.

--
Daniel Gustafsson





Re: About PostgreSQL Query Plan

2025-01-14 Thread Eşref Halıcıoğlu
Hello,

Okay, thank you very much, Mr. David, for your support and the information.

Eşref

Re: Postgresql database terminates abruptly with too many open files error

2025-01-14 Thread Frank Lanitz

Hello,


Have you checked with something like lsof to see which file descriptors are open?

Cheers,
frank
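On Linux, the check Frank suggests can also be done without lsof by reading /proc; a minimal sketch (it inspects the current process for illustration — in practice you would point it at each postgres PID, e.g. from pgrep):

```python
import os
import resource

def open_fd_count(pid="self"):
    """Count open descriptors by listing /proc/<pid>/fd (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# Per-process soft/hard descriptor limits, for comparison (RLIMIT_NOFILE).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open fds: {open_fd_count()}, soft limit: {soft}, hard limit: {hard}")
```

Against a running server, `lsof -u postgres | wc -l` (or summing these /proc counts over the postgres PIDs) shows who is actually holding the descriptors.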

On 14.01.25 at 13:58, Sri Mrudula Attili wrote:

Hello Team,

  We have a PostgreSQL VDB (virtual database, Delphix) that keeps terminating due to
"too many open files".

Below are a few alerts that we could see in the postgresql.log


< 2025-01-14 11:37:20.724 GMT >LOG:  out of file descriptors: Too many open 
files in system; release and retry

< 2025-01-14 11:37:20.724 GMT >FATAL:  epoll_create1 failed: Too many open 
files in system


The number of allowed open files at the OS level is 65000. Even then we see
all of these being utilised, causing the database to terminate.


Is there a way we could avoid this?


Thanks,

Sri









Re: Postgresql database terminates abruptly with too many open files error

2025-01-14 Thread Ron Johnson
On Tue, Jan 14, 2025 at 7:58 AM Sri Mrudula Attili  wrote:

> Hello Team,
>
>   We have a PostgreSQL VDB (virtual database, Delphix) that keeps
> terminating due to "too many open files".
>
> Below are a few alerts that we could see in the postgresql.log
>
>
> < 2025-01-14 11:37:20.724 GMT >LOG:  out of file descriptors: Too many
> open files in system; release and retry
>
> < 2025-01-14 11:37:20.724 GMT >FATAL:  epoll_create1 failed: Too many
> open files in system
>
>
> The number of allowed open files at the OS level is 65000. Even then we see
> all of these being utilised, causing the database to terminate.
>
>
> Is there a way we could avoid this?
>

We need more information.



Re: Postgresql database terminates abruptly with too many open files error

2025-01-14 Thread Tom Lane
Sri Mrudula Attili  writes:
>   We have a PostgreSQL VDB (virtual database, Delphix) that keeps
> terminating due to "too many open files".

What do you have max_connections set to, and how many actually-live
server processes are there typically?

> The number of allowed open files at the OS level is 65000.

I'm suspecting that you either need to increase that, decrease
max_files_per_process (which defaults to 1000), or decrease
max_connections.

regards, tom lane
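Tom's three knobs interact roughly multiplicatively: each backend may hold up to max_files_per_process descriptors, so the worst case scales with max_connections. A back-of-envelope sketch (the max_connections value below is an assumed example, not taken from the original post; only the 65000 limit and the 1000 default come from this thread):

```python
# Worst-case descriptor demand if every backend opens its full allowance.
max_connections = 100          # check with: SHOW max_connections;
max_files_per_process = 1000   # PostgreSQL default
os_limit = 65000               # OS-wide limit reported in this thread

worst_case = max_connections * max_files_per_process
print(f"worst case: {worst_case} descriptors, OS limit: {os_limit}")
if worst_case > os_limit:
    print("over budget: raise the OS limit, or lower "
          "max_files_per_process / max_connections")
```

Auxiliary processes (autovacuum workers, WAL writer, etc.) also hold descriptors, so the true demand is somewhat higher than this product.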




Re: Intermittent errors when fetching cursor rows on PostgreSQL 16

2025-01-14 Thread Adrian Klaver




On 1/13/25 11:56 PM, Enrico Schenone wrote:

On 13/01/25 at 18:26, Adrian Klaver wrote:





In your OP you stated:

"Production environments can be:

 * Distinct application server and DB server on distinct subnets (no
   dropped packet detected on firewall, no memory/disk/network failure
   detected by the "nmon" tool)
 * Distinct application server and DB server on the same subnet (no
   firewall)
 * Same server for PostgreSQL and applications
"

In all those cases are the various servers all running completely 
within the providers infrastructure?


No, the second production environment is on premises at the customer's
datacenter, under the same VMware hypervisor.


Which environment did you run the recent prolonged test against?

I'll make more investigations on the second and third environments by
increasing the verbosity of both the DB and application logs.


*Enrico Schenone*
Software Architect



--
Adrian Klaver
adrian.kla...@aklaver.com




Fwd: Lock in postgreSQL

2025-01-14 Thread Igor Korot
Hi, ALL,

This concerns both the libpq and ODBC APIs.

Hopefully I can get answers on both.

Thank you.


-- Forwarded message -
From: Igor Korot 
Date: Tue, Jan 14, 2025 at 12:50 AM
Subject: Lock in postgreSQL
To: pgsql-general@lists.postgresql.org, PostgreSQL ODBC list



Hi, ALL,
https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS

There is an explanation of the advisory lock functions there.

I have 2 questions about that:
1. Is calling one of these functions the way to acquire the lock?
2. If I work over an ODBC-based connection, I should call SQLExecDirect()
with the function in question, correct?

Thank you.
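The call pairing these questions are about can be sketched briefly: pg_advisory_lock() acquires a session-level lock and pg_advisory_unlock() releases it, each issued as an ordinary SELECT (over ODBC, that SELECT is what you would pass to SQLExecDirect()). A minimal sketch using a generic DB-API cursor — the helper name and the commented psycopg2 usage are illustrative, not from the docs:

```python
LOCK_SQL = "SELECT pg_advisory_lock(%s)"      # blocks until the lock is granted
UNLOCK_SQL = "SELECT pg_advisory_unlock(%s)"  # releases the session-level lock

def with_advisory_lock(conn, key, work):
    """Acquire advisory lock `key`, run work(), and always release the lock."""
    with conn.cursor() as cur:
        cur.execute(LOCK_SQL, (key,))
        try:
            return work()
        finally:
            cur.execute(UNLOCK_SQL, (key,))

# Hypothetical usage (requires a reachable server):
# import psycopg2
# conn = psycopg2.connect("dbname=test")
# with_advisory_lock(conn, 42, lambda: print("only one session runs this"))
```

Session-level advisory locks survive across transactions, so the try/finally release matters; pg_advisory_xact_lock() is the transaction-scoped alternative that releases itself at commit or rollback.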




Re: Fwd: Lock in postgreSQL

2025-01-14 Thread Adrian Klaver




On 1/14/25 10:42 AM, Igor Korot wrote:

Hi, ALL,

This concerns both the libpq and ODBC APIs.

Hopefully I can get answers on both.


Read:

https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Intermittent errors when fetching cursor rows on PostgreSQL 16

2025-01-14 Thread Enrico Schenone

On 14/01/25 at 18:56, Adrian Klaver wrote:





Which environment did you run the recent prolonged test against?
The first one: distinct application server and DB server on distinct
subnets.


I'll make more investigations on the second and third environments by
increasing the verbosity of both the DB and application logs.






*Enrico Schenone*
Software Architect