Re: thousands of CachedPlan entry per backend

2023-06-01 Thread Pavel Stehule
Hi

On Thu, Jun 1, 2023 at 8:53, Laurenz Albe wrote:

> On Thu, 2023-06-01 at 03:36 +, James Pang (chaolpan) wrote:
> > On PG v14.8-1, with clients using the PostgreSQL JDBC driver, we found 40MB
> > of process memory per backend; the operating-system figure and the
> > memory-context dump's "Grand total:" matched.
> > But in the details, we found that almost all entries belong to
> > "CacheMemoryContext". This line reports
> > CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks); 7715408 used,
> > but there are thousands of child context lines, and the sum of their blocks
> > is much more than the "8737352 total in 42 blocks".
> >
> > Our application uses the PostgreSQL JDBC driver with default parameters
> > (max 256 prepared statements). There are many triggers and functions in
> > this database, and a few functions run SQL through the pg_background
> > extension. We have thousands of connections, and we are concerned about why
> > there are thousands of cached SQL entries; they consume a huge amount of
> > memory. Is there any way to limit the number of cached plan entries to save
> > memory? Or is it abnormal behavior, or a bug, to see so many cached plan
> > lines?
>
> If you have thousands of connections, that's your problem.  You need
> effective connection pooling.  Then 40MB per backend won't be a problem at
> all.  Having thousands of connections will cause other, worse problems for
> you.
>
> See for example
>
> https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/
>
> If you want to use functions, but don't want to benefit from plan caching,
> you can set
> the configuration parameter "plan_cache_mode" to "force_custom_plan".
>
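For reference, the plan_cache_mode parameter mentioned above can be set per session or pinned to a single function; a sketch (the function name is a placeholder):

```sql
-- Per-session: always replan, never fall back to a cached generic plan
SET plan_cache_mode = force_custom_plan;

-- Per-function: affects only statements executed inside this function
ALTER FUNCTION my_func() SET plan_cache_mode = force_custom_plan;
```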

The problem of too much cached metadata can also be caused by overly long
sessions.

In that case it is good to discard the session (reconnect) after one hour, or
maybe less.

Regards

Pavel


>
> Yours,
> Laurenz Albe
>
>
>


RE: thousands of CachedPlan entry per backend

2023-06-01 Thread James Pang (chaolpan)
Yes, there is too much cached metadata, and we are thinking of a workaround to
disconnect the sessions in a timely manner.
In addition, based on the dumped memory context, I have two questions:
   1) We found thousands of cached plans, yet the JDBC driver only allows a
maximum of 256 cached prepared statements, so how does the backend cache so
many SQL plans? If we have one function, will calling it make the backend
cache the plan of every SQL statement inside that function too? And do table
triggers have similar caching behavior?
   2) From this line we see a total of 42 blocks and 215 chunks:
CacheMemoryContext: 8737352 total in 42 blocks; 1021944 free (215 chunks);
7715408 used
   But summing its child-level entries, the totals of blocks and chunks are
much larger than those of CacheMemoryContext itself. Is that expected?
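Since version 14 (the version in use here), PostgreSQL exposes the same data through the pg_backend_memory_contexts view; a sketch for cross-checking the child totals from SQL:

```sql
-- Direct children of CacheMemoryContext in the current backend.
-- Each child context's blocks are separate allocations, so they are
-- not included in the parent's own "total" line.
SELECT count(*)         AS child_contexts,
       sum(total_bytes) AS total_bytes,
       sum(used_bytes)  AS used_bytes
FROM pg_backend_memory_contexts
WHERE parent = 'CacheMemoryContext';
```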


Thanks,

James




From: Pavel Stehule 
Sent: Thursday, June 1, 2023 3:19 PM
To: Laurenz Albe 
Cc: James Pang (chaolpan) ; 
[email protected]
Subject: Re: thousands of CachedPlan entry per backend




Re: thousands of CachedPlan entry per backend

2023-06-01 Thread Laurenz Albe
On Thu, 2023-06-01 at 08:50 +, James Pang (chaolpan) wrote:
> We found thousands of cached plans, yet the JDBC driver only allows a
> maximum of 256 cached prepared statements, so how does the backend cache so
> many SQL plans? If we have one function, will calling it make the backend
> cache the plan of every SQL statement inside that function too? And do
> table triggers have similar caching behavior?

Yes, as long as the functions are written in PL/pgSQL.
It only affects static SQL, that is, nothing that is run with EXECUTE.
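A minimal illustration of that distinction (table and function names are made up):

```sql
CREATE FUNCTION count_rows(p_id int) RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE n bigint;
BEGIN
  -- Static SQL: PL/pgSQL prepares this once, and the backend caches the plan.
  SELECT count(*) INTO n FROM my_table WHERE id = p_id;

  -- Dynamic SQL: planned on every call, no plan is cached.
  EXECUTE 'SELECT count(*) FROM my_table WHERE id = $1' INTO n USING p_id;
  RETURN n;
END;
$$;
```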

Yours,
Laurenz Albe




Re: How to reduce latency with fast short queries in Postgresql 15.3 on a NUMA server

2023-06-01 Thread Sergio Rus
Thanks for your replies, you were totally right: it was due to the CPU
governor, which was set to 'powersave'. I've changed it to 'performance' and
the server is flying now. I'm still working on it, but the first quick tests
I've made show much better numbers. Those simple short queries run faster
now, and the latency is basically the same as, or even lower than, on the old
server. The server feels more responsive overall.
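For anyone following along, the switch can be done either through sysfs or with cpupower (requires root; paths as in the outputs below):

```shell
# Set every core's governor via sysfs
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done

# Equivalent, using cpupower
cpupower -c all frequency-set -g performance
```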

I finally installed cpupower to simplify the process, but you can also use
basic shell commands. Here is the output of some commands:

# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
=>
performance powersave

# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
=>
performance

# cpupower -c all frequency-info
=>
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1000 MHz - 3.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1000 MHz and 3.90 GHz.
  The governor "performance" may decide which speed to use
  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 1.94 GHz (asserted by call to kernel)
  boost state support:
Supported: yes
Active: yes

analyzing CPU 1:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 1
  CPUs which need to have their frequency coordinated by software: 1
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1000 MHz - 3.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1000 MHz and 3.90 GHz.
  The governor "performance" may decide which speed to use
  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 1.91 GHz (asserted by call to kernel)
  boost state support:
Supported: yes
Active: yes

analyzing CPU 2:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 2
  CPUs which need to have their frequency coordinated by software: 2
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1000 MHz - 3.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1000 MHz and 3.90 GHz.
  The governor "performance" may decide which speed to use
  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 2.14 GHz (asserted by call to kernel)
  boost state support:
Supported: yes
Active: yes

... (cropped)

analyzing CPU 9:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 9
  CPUs which need to have their frequency coordinated by software: 9
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1000 MHz - 3.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1000 MHz and 3.90 GHz.
  The governor "performance" may decide which speed to use
  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 2.95 GHz (asserted by call to kernel)
  boost state support:
Supported: yes
Active: yes

... (cropped)

analyzing CPU 26:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 26
  CPUs which need to have their frequency coordinated by software: 26
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1000 MHz - 3.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1000 MHz and 3.90 GHz.
  The governor "performance" may decide which speed to use
  within this range.
  current CPU frequency: Unable to call hardware
  current CPU frequency: 1.00 GHz (asserted by call to kernel)
  boost state support:
Supported: yes
Active: yes

analyzing CPU 27:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 27
  CPUs which need to have their frequency coordinated by software: 27
  maximum transition latency:  Cannot determine or is not supported.
  hardware limits: 1000 MHz - 3.90 GHz
  available cpufreq governors: performance powersave
  current policy: frequency should be within 1000 MHz and 3.90 GHz.
  The governor "performance" may de

RE: thousands of CachedPlan entry per backend

2023-06-01 Thread James Pang (chaolpan)
   These lines about "SPI Plan": are these the SPI_prepare plan entries
related to PL/pgSQL functions? Is it possible to set a GUC to cap the number
of cached plans per backend?

SPI Plan: 1024 total in 1 blocks; 600 free (0 chunks); 424 used
CachedPlan: 2048 total in 2 blocks; 304 free (1 chunks); 1744 used: xxx
CachedPlanSource: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used: 
xxx
  CachedPlanQuery: 2048 total in 2 blocks; 704 free (0 chunks); 1344 used

Thanks,

James

-Original Message-
From: Laurenz Albe  
Sent: Thursday, June 1, 2023 8:48 PM
To: James Pang (chaolpan) ; Pavel Stehule 

Cc: [email protected]
Subject: Re: thousands of CachedPlan entry per backend



Re: thousands of CachedPlan entry per backend

2023-06-01 Thread Pavel Stehule
On Fri, Jun 2, 2023 at 3:45, James Pang (chaolpan) wrote:

> These lines about "SPI Plan": are these the SPI_prepare plan entries
> related to PL/pgSQL functions? Is it possible to set a GUC to cap the
> number of cached plans per backend?
>

There is no limit on the size of the system caches. You can use pgbouncer,
which can implicitly refresh sessions after one hour (and this limit can be
reduced).
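A pgbouncer sketch of that setup (parameter names from pgbouncer's standard configuration; 3600 seconds is its default server_lifetime):

```ini
[pgbouncer]
pool_mode = transaction
; unused server connections older than this are closed, discarding the
; backend's cached plans and metadata with them
server_lifetime = 3600
```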

Regards

Pavel



