Re: Userland copy of pg_statistic - is there a solution?

2024-09-30 Thread Laurenz Albe
On Mon, 2024-09-30 at 08:31 +0100, Vinícius Abrahão wrote:
> postgres=# create table backup_pg_statistic as select * from pg_statistic;
> 2024-09-30 08:25:56 BST [7400]: 
> user=vinnix,db=postgres,app=psql,client=[local] ERROR:  column "stavalues1" 
> has pseudo-type anyarray
> 2024-09-30 08:25:56 BST [7400]: 
> user=vinnix,db=postgres,app=psql,client=[local] STATEMENT:  create table 
> backup_pg_statistic as select * from pg_statistic;
> ERROR:  column "stavalues1" has pseudo-type anyarray
> Time: 9.544 ms
> postgres=# create table test_array(a anyarray);
> 2024-09-30 08:26:40 BST [7400]: 
> user=vinnix,db=postgres,app=psql,client=[local] ERROR:  column "a" has 
> pseudo-type anyarray
> 2024-09-30 08:26:40 BST [7400]: 
> user=vinnix,db=postgres,app=psql,client=[local] STATEMENT:  create table 
> test_array(a anyarray);
> ERROR:  column "a" has pseudo-type anyarray
> Time: 9.137 ms
> 
> 
> Is there a solution for this simple problem?

Use a text[] for your copy of the stavalues* columns.
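For example, something like this (a sketch, not tested; the anyarray columns can be cast through text to text[], and the column list is abridged - adjust it to your server version's pg_statistic layout):

```sql
-- Cast the anyarray columns through text so CREATE TABLE AS accepts them
CREATE TABLE backup_pg_statistic AS
SELECT starelid, staattnum, stainherit,
       stanullfrac, stawidth, stadistinct,
       stakind1, stakind2, stakind3, stakind4, stakind5,
       stanumbers1, stanumbers2, stanumbers3, stanumbers4, stanumbers5,
       stavalues1::text::text[] AS stavalues1,
       stavalues2::text::text[] AS stavalues2,
       stavalues3::text::text[] AS stavalues3,
       stavalues4::text::text[] AS stavalues4,
       stavalues5::text::text[] AS stavalues5
FROM pg_statistic;
```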

Yours,
Laurenz Albe




Trouble in generating the plpgsql procedure code

2024-09-30 Thread Nikhil Ingale
Guys,
I have a requirement to generate the plpgsql procedure code itself by
reading the postgresql metadata or system tables. I don't want to write a
plpgsql procedure by hand for every table because I have many tables in the
db. I want to generate the procedure for more than 100 tables and write all
the procedure code to a single file which can then be executed to create
the actual procedures.

While I generate the procedure code I want it to continue on the next
line in a more readable format. I'm writing the generated code for all
tables in a single shot with the help of COPY to PROG

My procedure code is below. But the generated code contains literal \n
instead of starting the code on the next line.


CREATE OR REPLACE FUNCTION emp_dts_iot () RETURNS TRIGGER AS
$BODY$\nDECLARE\n  nCount INT;\nBEGIN\n   IF TG_OP IN ('INSERT') THEN\n
SELECT count(1) INTO nCount FROM employee \n WHERE id =
:NEW.id AND STATUS_FLAG = 9 ; \n  IF COALESCE(nCount,0) > 0
THEN\n UPDATE test.employee  \nid =
:NEW.\n   ,description = :NEW.description\n
  ,state_flag = 2\n WHERE id = :NEW.id ; \n
 ELSE\n\t INSERT INTO employee VALUES ( \n
:NEW.id\n   ,:NEW.description\n,1 ) ; \n
 END IF;\n\n   ELSIF TG_OP IN ('UPDATE') THEN\n SELECT count(1)
INTO nCount FROM test.employee\n

The code should have been written on the next line instead of containing a literal \n.


drop procedure insteadoftriggers;
create or replace procedure insteadoftriggers( IN e_owner TEXT,
p_table_name TEXT DEFAULT NULL, emp_owner TEXT DEFAULT 'test')
AS $$
DECLARE
TstTableCursor CURSOR IS SELECT table_name,test_table_name FROM
app_tables WHERE TEST_TABLE_NAME IS NOT NULL AND table_name =
COALESCE(p_table_name,table_name) AND owner= COALESCE(e_owner,owner) ;

l_cnt NUMERIC := 0;
tab TEXT := E'\t';
l_col_str TEXT;
l_table_Name TEXT ;
cKeyCol TEXT DEFAULT '' ;
cWhere TEXT DEFAULT '' ;
trigger_body TEXT ;
cSpace character varying(100) := '   ';
BEGIN
FOR TstTableRec IN TstTableCursor
LOOP
l_table_name := TstTableRec.TABLE_NAME ;
trigger_body :=
'CREATE OR REPLACE FUNCTION prod.' || TstTableRec.TABLE_NAME || '_IOT()
RETURNS TRIGGER AS $BODY$' || CHR(10) ||
'DECLARE' || CHR(10) ||
'  ' || 'nCount INT;' || chr(10) ||
'BEGIN' || chr(10) ||
'   ' || 'IF TG_OP IN ' || '(' ||'INSERT'||') THEN' || chr(10) ||
' SELECT count(1) INTO nCount FROM test.' || TstTableRec.TABLE_NAME
|| chr(10) ||
  GetTrigClause(p_tdaowner, lower(TstTableRec.TABLE_NAME), ':NEW.')
|| ' AND STATUS_FLAG = 9 ; ' || chr(10) ||

 '  '|| 'IF COALESCE(nCount,0) > 0 THEN' || chr(10) ||

Can someone help me here?

Regards,
Nikhil Ingale


Re: Trouble in generating the plpgsql procedure code

2024-09-30 Thread Pavel Stehule
Hi

use format function

do $$
begin
  for i in 1..10
  loop
execute format($_$
create or replace function %I(a int)
returns int as $__$
begin
  return a + %s;
end;
$__$ language plpgsql;
$_$, 'foo_' || i, i);
  end loop;
end;
$$;
DO

(2024-09-30 12:21:29) postgres=# \sf foo_1
CREATE OR REPLACE FUNCTION public.foo_1(a integer)
 RETURNS integer
 LANGUAGE plpgsql
AS $function$
begin
  return a + 1;
end;
$function$

Regards

Pavel

po 30. 9. 2024 v 12:09 odesílatel Nikhil Ingale  napsal:

> Guys,
> I have a requirement to generate the plpgsql procedure code itself by
> reading the postgresql metadata or system tables. I don't want to write a
> plpgsql procedure every time for every table because I have many tables in
> db. I want to generate the procedure for more than 100 tables and write all
> the procedure code to a single file which can be used to execute to create
> the actual procedure.
>
> While I generate the procedure code I want the code to be added to the
> next line in a better readability format. I'm writing the generated code
> for all tables in single shot with the help of COPY to PROG
>
> My procedure code is mentioned below. But the generated code is having \n
> instead of adding the code to the next line.
>
>
> CREATE OR REPLACE FUNCTION emp_dts_iot () RETURNS TRIGGER AS
> $BODY$\nDECLARE\n  nCount INT;\nBEGIN\n   IF TG_OP IN ('INSERT') THEN\n
> SELECT count(1) INTO nCount FROM employee \n WHERE id =
> :NEW.id AND STATUS_FLAG = 9 ; \n  IF COALESCE(nCount,0) > 0
> THEN\n UPDATE test.employee  \nid =
> :NEW.\n   ,description = :NEW.description\n
>   ,state_flag = 2\n WHERE id = :NEW.id ; \n
>  ELSE\n\t INSERT INTO employee VALUES ( \n
> :NEW.id\n   ,:NEW.description\n,1 ) ; \n
>  END IF;\n\n   ELSIF TG_OP IN ('UPDATE') THEN\n SELECT count(1)
> INTO nCount FROM test.employee\n
>
> The code should have been written in the next line instead of \n.
>
>
> drop procedure insteadoftriggers;
> create or replace procedure insteadoftriggers( IN e_owner TEXT,
> p_table_name TEXT DEFAULT NULL, emp_owner TEXT DEFAULT 'test')
> AS $$
> DECLARE
> TstTableCursor CURSOR IS SELECT table_name,test_table_name FROM
> app_tables WHERE TEST_TABLE_NAME IS NOT NULL AND table_name =
> COALESCE(p_table_name,table_name) AND owner= COALESCE(e_owner,owner) ;
>
> l_cnt NUMERIC := 0;
> tab TEXT := E'\t';
> l_col_str TEXT;
> l_table_Name TEXT ;
> cKeyCol TEXT DEFAULT '' ;
> cWhere TEXT DEFAULT '' ;
> trigger_body TEXT ;
> cSpace character varying(100) := '   ';
> BEGIN
> FOR TstTableRec IN TstTableCursor
> LOOP
> l_table_name := TstTableRec.TABLE_NAME ;
> trigger_body :=
> 'CREATE OR REPLACE FUNCTION prod.' || TstTableRec.TABLE_NAME || '_IOT()
> RETURNS TRIGGER AS $BODY$' || CHR(10) ||
> 'DECLARE' || CHR(10) ||
> '  ' || 'nCount INT;' || chr(10) ||
> 'BEGIN' || chr(10) ||
> '   ' || 'IF TG_OP IN ' || '(' ||'INSERT'||') THEN' || chr(10) ||
> ' SELECT count(1) INTO nCount FROM test.' ||
> TstTableRec.TABLE_NAME || chr(10) ||
>   GetTrigClause(p_tdaowner, lower(TstTableRec.TABLE_NAME),
> ':NEW.') || ' AND STATUS_FLAG = 9 ; ' || chr(10) ||
>
>  '  '|| 'IF COALESCE(nCount,0) > 0 THEN' || chr(10) ||
>
> Can someone help me here?
>
> Regards,
> Nikhil Ingale
>


Re: Failing GSSAPI TCP when connecting to server

2024-09-30 Thread Peter
Hello Folks,

  Thanks for your inspiration; I made some progress (found
a way to avoid the issue).

The issue is most likely not related to postgres.

Ron Johnson said:

>> A configuration problem on the machine(s) can be ruled out,
> Famous last words.

Trust me. :)

> Is there a way to test pmc authentication via some other tool, like psql?

Sure, that works. The problem is contained within the running
application program(s); nothing else shows it.

> If *only *the application changed, then by definition it can't be a
> database problem.  *Something* in the application changed; you just haven't
> found it.

Obviously, yes. But then, such a change might expose undesired
behaviour elsewhere.

> Specifically, I'd read the Discourse 2.3.0 and 2.3.1 release notes.

Correction: it is actually 3.2.0 and 3.3.1.

I finally resorted to bisecting, and it's not really a problem in
Discourse either. It comes from a feature I had enabled in the course
of migrating, a filesystem change monitor based on kqueue:
   https://man.freebsd.org/cgi/man.cgi?query=kqueue
Removing that feature solves the issue for now.

I still have no idea how that tool might lead to mishandled sockets
elsewhere; it might somehow have to do with the async processing of
the DB connect. That would need a thorough look into the code where
this is done.

Tom Lane wrote:

>The TCP trace looks like the client side is timing out too quickly
>in the unsuccessful case. It's not clear to me how the different
>Discourse version would lead to the Kerberos library applying a
>different timeout.

It's not a timeout; a timeout would close the socket. It rather
seems to forget the socket.

>Still, it seems like most of the moving parts
>here are outside of Postgres' control --- I don't think that libpq
>itself has much involvement in the KDC communication.

Kerberos is weird. It goes into libgssapi, but libgssapi doesn't
do much on its own; it just maps so-called "mech"s, which then point
to the actual Kerberos code - which in the case of FreeBSD is very
ancient (but work should be underway to modernize it). It's one of
the creepiest pieces of code I've looked into.

> I concur with looking at the Discourse release notes and maybe asking
> some questions in that community.

They only support running that app in a certain containerization
on a specific brand of Linux. They don't like my questions and
might just delete them.

Anyway, I have a lead now to either avoid the problem or know where to
look more closely. And it is not directly related to postgres, but
rather to genuine socket mishandling and/or maybe some flaw in
FreeBSD.

cheers,
PMc




Userland copy of pg_statistic - is there a solution?

2024-09-30 Thread Vinícius Abrahão
Morning,

postgres=# create table backup_pg_statistic as select * from pg_statistic;
2024-09-30 08:25:56 BST [7400]:
user=vinnix,db=postgres,app=psql,client=[local] ERROR:  column "stavalues1"
has pseudo-type anyarray
2024-09-30 08:25:56 BST [7400]:
user=vinnix,db=postgres,app=psql,client=[local] STATEMENT:  create table
backup_pg_statistic as select * from pg_statistic;
ERROR:  column "stavalues1" has pseudo-type anyarray
Time: 9.544 ms
postgres=# create table test_array(a anyarray);
2024-09-30 08:26:40 BST [7400]:
user=vinnix,db=postgres,app=psql,client=[local] ERROR:  column "a" has
pseudo-type anyarray
2024-09-30 08:26:40 BST [7400]:
user=vinnix,db=postgres,app=psql,client=[local] STATEMENT:  create table
test_array(a anyarray);
ERROR:  column "a" has pseudo-type anyarray
Time: 9.137 ms
Is there a solution for this simple problem?

Cheers,
Vinícius


Re: Trouble in generating the plpgsql procedure code

2024-09-30 Thread Nikhil Ingale
Let me try, Thanks Pavel.

On Mon, Sep 30, 2024 at 3:53 PM Pavel Stehule 
wrote:

> Hi
>
> use format function
>
> do $$
> begin
>   for i in 1..10
>   loop
> execute format($_$
> create or replace function %I(a int)
> returns int as $__$
> begin
>   return a + %s;
> end;
> $__$ language plpgsql;
> $_$, 'foo_' || i, i);
>   end loop;
> end;
> $$;
> DO
>
> (2024-09-30 12:21:29) postgres=# \sf foo_1
> CREATE OR REPLACE FUNCTION public.foo_1(a integer)
>  RETURNS integer
>  LANGUAGE plpgsql
> AS $function$
> begin
>   return a + 1;
> end;
> $function$
>
> Regards
>
> Pavel
>
> po 30. 9. 2024 v 12:09 odesílatel Nikhil Ingale 
> napsal:
>
>> Guys,
>> I have a requirement to generate the plpgsql procedure code itself by
>> reading the postgresql metadata or system tables. I don't want to write a
>> plpgsql procedure every time for every table because I have many tables in
>> db. I want to generate the procedure for more than 100 tables and write all
>> the procedure code to a single file which can be used to execute to create
>> the actual procedure.
>>
>> While I generate the procedure code I want the code to be added to the
>> next line in a better readability format. I'm writing the generated code
>> for all tables in single shot with the help of COPY to PROG
>>
>> My procedure code is mentioned below. But the generated code is having \n
>> instead of adding the code to the next line.
>>
>>
>> CREATE OR REPLACE FUNCTION emp_dts_iot () RETURNS TRIGGER AS
>> $BODY$\nDECLARE\n  nCount INT;\nBEGIN\n   IF TG_OP IN ('INSERT') THEN\n
>> SELECT count(1) INTO nCount FROM employee \n WHERE id =
>> :NEW.id AND STATUS_FLAG = 9 ; \n  IF COALESCE(nCount,0) > 0
>> THEN\n UPDATE test.employee  \nid =
>> :NEW.\n   ,description = :NEW.description\n
>>   ,state_flag = 2\n WHERE id = :NEW.id ; \n
>>  ELSE\n\t INSERT INTO employee VALUES ( \n
>> :NEW.id\n   ,:NEW.description\n,1 ) ; \n
>>  END IF;\n\n   ELSIF TG_OP IN ('UPDATE') THEN\n SELECT count(1)
>> INTO nCount FROM test.employee\n
>>
>> The code should have been written in the next line instead of \n.
>>
>>
>> drop procedure insteadoftriggers;
>> create or replace procedure insteadoftriggers( IN e_owner TEXT,
>> p_table_name TEXT DEFAULT NULL, emp_owner TEXT DEFAULT 'test')
>> AS $$
>> DECLARE
>> TstTableCursor CURSOR IS SELECT table_name,test_table_name FROM
>> app_tables WHERE TEST_TABLE_NAME IS NOT NULL AND table_name =
>> COALESCE(p_table_name,table_name) AND owner= COALESCE(e_owner,owner) ;
>>
>> l_cnt NUMERIC := 0;
>> tab TEXT := E'\t';
>> l_col_str TEXT;
>> l_table_Name TEXT ;
>> cKeyCol TEXT DEFAULT '' ;
>> cWhere TEXT DEFAULT '' ;
>> trigger_body TEXT ;
>> cSpace character varying(100) := '   ';
>> BEGIN
>> FOR TstTableRec IN TstTableCursor
>> LOOP
>> l_table_name := TstTableRec.TABLE_NAME ;
>> trigger_body :=
>> 'CREATE OR REPLACE FUNCTION prod.' || TstTableRec.TABLE_NAME || '_IOT()
>> RETURNS TRIGGER AS $BODY$' || CHR(10) ||
>> 'DECLARE' || CHR(10) ||
>> '  ' || 'nCount INT;' || chr(10) ||
>> 'BEGIN' || chr(10) ||
>> '   ' || 'IF TG_OP IN ' || '(' ||'INSERT'||') THEN' || chr(10) ||
>> ' SELECT count(1) INTO nCount FROM test.' ||
>> TstTableRec.TABLE_NAME || chr(10) ||
>>   GetTrigClause(p_tdaowner, lower(TstTableRec.TABLE_NAME),
>> ':NEW.') || ' AND STATUS_FLAG = 9 ; ' || chr(10) ||
>>
>>  '  '|| 'IF COALESCE(nCount,0) > 0 THEN' || chr(10) ||
>>
>> Can someone help me here?
>>
>> Regards,
>> Nikhil Ingale
>>
>


Re: Suggestion for memory parameters

2024-09-30 Thread Philip Semanchuk



> On Sep 26, 2024, at 7:03 AM, yudhi s  wrote:
> 
> In a RDS postgres ...

>  Is it fine to let it use "FreeLocalStorage" unless it goes till zero?

Hi Yudhi,
FreeLocalStorage and some of the other things you ask about are specific to AWS 
RDS, so you might have better luck getting answers on an RDS-specific mailing 
list. We also use RDS-hosted Postgres and so I completely understand how 
Postgres and RDS are intertwined.

We have had runaway queries exhaust FreeLocalStorage. It has been quite a while 
since that happened, so my memories are hazy, but I’m pretty sure that when we 
used all of FreeLocalStorage, the result was that Postgres restarted. It might 
be equivalent to using all memory and disk space on a standalone system. Once 
there’s no storage left, behavior is unpredictable but we can’t be surprised if 
things crash. Usually our runaway queries got killed before FreeLocalStorage 
filled up, but not always.

I second Veem’s suggestion to set work_mem on a per-session basis. Also note 
that the doc for work_mem says, “the total memory used could be many times the 
value of work_mem; it is necessary to keep this fact in mind when choosing the 
value."

https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
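As a sketch, scoping the setting to one transaction looks like this (the 256MB value is only an example - pick one that fits your instance):

```sql
BEGIN;
SET LOCAL work_mem = '256MB';  -- reverts automatically at COMMIT/ROLLBACK
-- ... run the memory-hungry query here ...
COMMIT;
```

Plain `SET work_mem = '256MB';` does the same for the remainder of the session rather than just the transaction.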

Cheers
Philip



Re: Question on session timeout

2024-09-30 Thread sud
On Tue, Oct 1, 2024 at 4:10 AM Adrian Klaver 
wrote:

> On 9/30/24 13:01, sud wrote:
> > Hello,
> > We are frequently seeing the total DB connection reaching ~2000+ whereas
>
> Where are you getting the ~2000  count from?
>
> > the total number of active sessions in pg_stat_activity staying <100 at
> > any point in time. And when we see the sessions from DB side they are
> > showing most of the sessions with state as 'idle' having
> > backend_start/xact_start showing date ~10days older. We do use
> > application level connection pooling, and we have ~120 sets as both the
>
> What do you mean by ~120 sets, in particular what is a set?
>
> > "max idle" and "max active" connection count and "maxage" as 7 days, so
> > does this suggest any issue at connection pool setup?
>
> Using what pooler?
>
> >
> > We do see keep alive queries in the DB (select 1), not sure if that is
> > making this scenario. When checking the
>
> How often do to keep alive queries run?
>
> > "idle_in_transaction_session_timeout" it is set as 24hours and
> > "idle_session_timeout" set as "0". So my question is , should we set the
> > parameter to a lesser value in DB cluster level like ~5minutes or so, so
> > as not to keep the idle sessions lying so long in the database and what
>
> '"idle_in_transaction_session_timeout" it is set as 24hours' is a foot
> gun as explained here:
>
> https://www.postgresql.org/docs/current/runtime-config-client.html
>
> idle_in_transaction_session_timeout (integer)
>
> [...]
>
> "This option can be used to ensure that idle sessions do not hold locks
> for an unreasonable amount of time. Even when no significant locks are
> held, an open transaction prevents vacuuming away recently-dead tuples
> that may be visible only to this transaction; so remaining idle for a
> long time can contribute to table bloat. See Section 24.1 for more
> details."
>
>
> With '"idle_session_timeout" set as "0"' a session without an open
> transaction is not going to timeout.
>
>
>

*Where are you getting the ~2000 count from?*
I see this in the "performance insights" dashboard, and it also matches
what I get when I query the count of sessions from pg_stat_activity.


*What do you mean by ~120 sets, in particular what is a set?*
These are the values set in the properties file which the application team
uses for connection pooling.


*Using what pooler?*
I need to check on this. As it's a Java application (JDBC driver for
connecting to the DB), I assumed it must be using standard connection
pooling. Will double check.


*How often do the keep alive queries run?*
Need to check. But in general, I am not sure whether these "keep alive"
queries are used for keeping a transaction alive or a session alive?

As you described, a long-open transaction whose session state is
"idle" is threatening, as it will cause locking and "transaction ID
wraparound" issues, whereas "idle sessions" with no open transaction
may not cause any issue as they do no harm. Does that mean we can have
any number of idle sessions, or should we also set some non-zero
timeout for the "idle_session_timeout" parameter too (maybe ~1hr or so)?

Is it correct to assume that the sessions in pg_stat_activity with a very
old XACT_START are the ones that are part of a long-running open
transaction (i.e. governed by idle_in_transaction_session_timeout),
whereas the ones with only an older BACKEND_START or QUERY_START are just
idle sessions (governed by idle_session_timeout) not tied to any open
transaction?
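For reference, this is roughly how I am pulling the session list (a sketch against the pg_stat_activity view):

```sql
-- Idle sessions, oldest open transactions first
SELECT pid, usename, state, backend_start, xact_start, query_start
FROM pg_stat_activity
WHERE state LIKE 'idle%'
ORDER BY xact_start NULLS LAST;
```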

A few observations:

I do see "MaximumUsedTransactionIDs" staying consistently at ~200M for a
long time and then coming down. And it matches "autovacuum_freeze_max_age",
which is set to 200M. I hope that's expected. We have max_connections set
to 5000.

"Database connections" touch ~2000 and then come down to 200. We see
uneven spikes in these, and the pattern seems to match the times when
errors occur during the insert queries submitted by the Java application
to load data into the tables.


Re: Suggestion for memory parameters

2024-09-30 Thread yudhi s
On Mon, Sep 30, 2024 at 8:46 PM Philip Semanchuk <
phi...@americanefficient.com> wrote:

>
>
> > On Sep 26, 2024, at 7:03 AM, yudhi s 
> wrote:
> >
> > In a RDS postgres ...
>
> >  Is it fine to let it use "FreeLocalStorage" unless it goes till zero?
>
> Hi Yudhi,
> FreeLocalStorage and some of the other things you ask about are specific
> to AWS RDS, so you might have better luck getting answers on an
> RDS-specific mailing list. We also use RDS-hosted Postgres and so I
> completely understand how Postgres and RDS are intertwined.
>
> We have had runaway queries exhaust FreeLocalStorage. It has been quite a
> while since that happened, so my memories are hazy, but I’m pretty sure
> that when we used all of FreeLocalStorage, the result was that Postgres
> restarted. It might be equivalent to using all memory and disk space on a
> standalone system. Once there’s no storage left, behavior is unpredictable
> but we can’t be surprised if things crash. Usually our runaway queries got
> killed before FreeLocalStorage filled up, but not always.
>
> I second Veem’s suggestion to set work_mem on a per-session basis. Also
> note that the doc for work_mem says, “the total memory used could be many
> times the value of work_mem; it is necessary to keep this fact in mind when
> choosing the value."
>
>
> https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
>
>
Thank you.

When I execute the query with explain (analyze, buffers), I see the
section below in the plan with "Sort Method" information in three places,
each showing ~75MB, which combined comes to <250MB. So, does that mean
it's enough to set work_mem to ~250MB for these queries before they start?

But somehow this query finishes in a few seconds when I execute it
using explain (analyze, buffers), while if I run it without explain it
runs for ~10 minutes+. My expectation was that explain analyze actually
executes the query fully. Is my understanding correct, and are the
disk-spill stats I am seeing accurate enough to go with?


Limit  (cost=557514.75..592517.20 rows=30 width=1430) (actual
time=2269.939..2541.527 rows=30 loops=1)
  Buffers: shared hit=886206, temp read=38263 written=56947
  I/O Timings: temp read=70.040 write=660.073
  ->  Gather Merge  (cost=557514.75..643393.02 rows=736048 width=1430)
(actual time=2269.938..2513.748 rows=30 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=886206, temp read=38263 written=56947
I/O Timings: temp read=70.040 write=660.073
->  Sort  (cost=556514.73..557434.79 rows=368024 width=1430)
(actual time=2227.392..2279.389 rows=100135 loops=3)
  Sort Key: column1, column2
  Sort Method: external merge  Disk: *77352kB*
  Buffers: shared hit=886206, temp read=38263 written=56947
  I/O Timings: temp read=70.040 write=660.073
  Worker 0:  Sort Method: external merge  Disk: *75592kB*
  Worker 1:  Sort Method: external merge  Disk: *74440kB*
  ->  Parallel Append  (cost=0.00..64614.94 rows=368024
width=1430) (actual time=0.406..570.105 rows=299204 loops=3)


Question on session timeout

2024-09-30 Thread sud
Hello,
We are frequently seeing the total DB connection count reaching ~2000+,
whereas the total number of active sessions in pg_stat_activity stays <100
at any point in time. And when we look at the sessions from the DB side,
most of them show state 'idle', with backend_start/xact_start dates ~10
days old. We do use application-level connection pooling, and we have ~120
set as both the "max idle" and "max active" connection counts and "maxage"
as 7 days, so does this suggest any issue with the connection pool setup?

We do see keep alive queries in the DB (select 1), not sure if that is
contributing to this scenario. Checking the settings,
"idle_in_transaction_session_timeout" is set to 24 hours and
"idle_session_timeout" is set to "0". So my question is, should we set the
parameter to a lesser value at the DB cluster level, like ~5 minutes or so,
so as not to keep idle sessions lying around so long in the database, and
what would be the advisable value for these parameters?

Regards
Sud


Re: Question on session timeout

2024-09-30 Thread Adrian Klaver

On 9/30/24 13:01, sud wrote:

Hello,
We are frequently seeing the total DB connection reaching ~2000+ whereas 


Where are you getting the ~2000  count from?

the total number of active sessions in pg_stat_activity staying <100 at 
any point in time. And when we see the sessions from DB side they are 
showing most of the sessions with state as 'idle' having 
backend_start/xact_start showing date ~10days older. We do use 
application level connection pooling, and we have ~120 sets as both the 


What do you mean by ~120 sets, in particular what is a set?

"max idle" and "max active" connection count and "maxage" as 7 days, so 
does this suggest any issue at connection pool setup?


Using what pooler?



We do see keep alive queries in the DB (select 1), not sure if that is 
making this scenario. When checking the 


How often do the keep alive queries run?

"idle_in_transaction_session_timeout" it is set as 24hours and 
"idle_session_timeout" set as "0". So my question is , should we set the 
parameter to a lesser value in DB cluster level like ~5minutes or so, so 
as not to keep the idle sessions lying so long in the database and what


'"idle_in_transaction_session_timeout" it is set as 24hours' is a foot 
gun as explained here:


https://www.postgresql.org/docs/current/runtime-config-client.html

idle_in_transaction_session_timeout (integer)

[...]

"This option can be used to ensure that idle sessions do not hold locks 
for an unreasonable amount of time. Even when no significant locks are 
held, an open transaction prevents vacuuming away recently-dead tuples 
that may be visible only to this transaction; so remaining idle for a 
long time can contribute to table bloat. See Section 24.1 for more details."



With '"idle_session_timeout" set as "0"' a session without an open 
transaction is not going to timeout.
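If you do decide to set them cluster-wide, it would look something like 
this (the values here are only illustrative, not recommendations):

```sql
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
ALTER SYSTEM SET idle_session_timeout = '1h';
SELECT pg_reload_conf();  -- both settings take effect without a restart
```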



would be the advisable value for these parameters?

Regards
Sud


--
Adrian Klaver
adrian.kla...@aklaver.com