Re: SubtransControlLock and performance problems

2020-02-18 Thread Laurenz Albe
On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:
> I tested 
> 
> CREATE OR REPLACE FUNCTION public.fx(integer)
>  RETURNS void
>  LANGUAGE plpgsql
> AS $function$
> begin
>   for i in 1..$1 loop
>   begin
> insert into foo values(i);
> exception when others then
>   raise notice 'yyy';
> end;
> end loop;
> end;
> $function$
> 
> and I don't see any significant difference between numbers less than 64 and 
> higher

Did you have several concurrent sessions accessing the rows that others created?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: DB running out of memory issues after upgrade

2020-02-18 Thread Tomas Vondra

On Tue, Feb 18, 2020 at 05:46:28PM +, Nagaraj Raj wrote:

After upgrading Postgres from v9.6.9 to v9.6.11, the DB is running into out-of-memory
issues. The workload has not changed before and after the upgrade.

spec: RAM 16GB, 4 vCore
Has any bug like this been reported, or are there suggestions on how to fix this
issue? I appreciate the response!



This bug report (in fact, we don't know if it's a bug, but OK) is
woefully incomplete :-(

The server log is mostly useless, unfortunately - it just says a bunch
of processes were killed (by OOM killer, most likely) so the server has
to restart. It tells us nothing about why the backends consumed so much
memory etc.

What would help us is knowing how much memory was the backend (killed by
OOM) consuming, which should be in dmesg.

And then MemoryContextStats output - you need to connect to a backend
consuming a lot of memory using gdb (before it gets killed) and do

 (gdb) p MemoryContextStats(TopMemoryContext)
 (gdb) q

and show us the output printed into server log. If it's a backend
running a query, it'd help knowing the execution plan.

It would also help knowing the non-default configuration, i.e. stuff
tweaked in postgresql.conf.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: DB running out of memory issues after upgrade

2020-02-18 Thread Nagaraj Raj
Below are the same configurations in the .conf file before and after the upgrade:

show max_connections; = 1743
show shared_buffers = "4057840kB"
show effective_cache_size = "8115688kB"
show maintenance_work_mem = "259MB"
show checkpoint_completion_target = "0.9"
show wal_buffers = "16MB"
show default_statistics_target = "100"
show random_page_cost = "1.1"
show effective_io_concurrency = "200"
show work_mem = "4MB"
show min_wal_size = "256MB"
show max_wal_size = "2GB"
show max_worker_processes = "8"
show max_parallel_workers_per_gather = "2"

Here are some syslog entries:

2020-02-16 21:01:17 UTC [-] The database process was killed by the OS due to excessive memory consumption.
2020-02-16 13:41:16 UTC [-] The database process was killed by the OS due to excessive memory consumption.

I identified one simple select which is consuming a lot of memory, and here is the
query plan:


Result  (cost=0.00..94891854.11 rows=3160784900 width=288)
  ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)
        ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=312949 width=288)
              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)
        ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)
              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)


Thanks,



Re: DB running out of memory issues after upgrade

2020-02-18 Thread Merlin Moncure
On Tue, Feb 18, 2020 at 12:10 PM Nagaraj Raj  wrote:
>
> Below are the same configurations ins .conf file before and after updagrade
>
> show max_connections; = 1743
> show shared_buffers = "4057840kB"
> show effective_cache_size =  "8115688kB"
> show maintenance_work_mem = "259MB"
> show checkpoint_completion_target = "0.9"
> show wal_buffers = "16MB"
> show default_statistics_target = "100"
> show random_page_cost = "1.1"
> show effective_io_concurrency =" 200"
> show work_mem = "4MB"
> show min_wal_size = "256MB"
> show max_wal_size = "2GB"
> show max_worker_processes = "8"
> show max_parallel_workers_per_gather = "2"

This smells like the OOM killer for sure.  How did you arrive at some of
these values, in particular max_connections and effective_cache_size?
How much memory is in this server?
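Taking the spec posted upthread (16 GB RAM) at face value, a back-of-the-envelope worst case for the posted settings looks like this. This is only a rough sketch: work_mem can be allocated several times per query, and each backend carries additional per-process overhead, so the real ceiling is higher.

```python
# Rough worst-case memory estimate for the posted settings (illustrative only).
GB = 1024 ** 3
MB = 1024 ** 2

shared_buffers = 4057840 * 1024      # "4057840kB", roughly 3.9 GB
max_connections = 1743
work_mem = 4 * MB                    # may be allocated more than once per query

# If every allowed connection ran just one work_mem-sized operation at once:
worst_case = shared_buffers + max_connections * work_mem
print(round(worst_case / GB, 1))     # ~10.7 GB of the 16 GB, before backend overhead
```

With per-backend overhead of even a few MB each, 1743 connections alone could exhaust the remaining RAM.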

merlin




Re: DB running out of memory issues after upgrade

2020-02-18 Thread Justin Pryzby
Please don't cross post to different lists.

Pgsql-general, PgAdmin Support, PostgreSQL Hackers,
"[email protected]", Postgres Performance List, Pg Bugs,
Pgsql-admin, Pgadmin-hackers, Pgsql-pkg-yum

On Tue, Feb 18, 2020 at 05:46:28PM +, Nagaraj Raj wrote:
> After upgrading Postgres from v9.6.9 to v9.6.11, the DB is running into out-of-memory
> issues. The workload has not changed before and after the upgrade.
> 
> spec: RAM 16gb,4vCore

On Tue, Feb 18, 2020 at 06:10:08PM +, Nagaraj Raj wrote:
> Below are the same configurations in the .conf file before and after the upgrade:
> show max_connections; = 1743
> show shared_buffers = "4057840kB"
> show work_mem = "4MB"
> show maintenance_work_mem = "259MB"

> Any bug reported like this or suggestions on how to fix this issue? I 
> appreciate the response..!! 
> 
> I could see the below error logs, and for this reason the database is more often
> going into recovery mode,

What do you mean "more often" ?  Did the crash/OOM happen before the upgrade, 
too ?

> 2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID 32731) was
> terminated by signal 9: Killed
> 2020-02-17 22:34:32 UTC::@:[20467]:DETAIL: Failed process was running:
> select info_starttime, info_starttimel, info_conversationid, info_status,
> classification_type, intentname, confidencescore, versions::text, messageid
> from salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1 )

That process is the one which was killed (in this case), but maybe not the
process responsible for using lots of *private* RAM.  Is
salesdb.liveperson.intents a view?  What is the query plan for that query?
(Run it with "explain".)
https://wiki.postgresql.org/wiki/SlowQueryQuestions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems

On Tue, Feb 18, 2020 at 06:10:08PM +, Nagaraj Raj wrote:
> I identified one simple select which consuming more memory and here is the 
> query plan,
> 
> Result  (cost=0.00..94891854.11 rows=3160784900 width=288)
>   ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)
>         ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=312949 width=288)
>               Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)
>         ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)
>               Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)

This is almost certainly unrelated.  It looks like that query did a seq scan
and accessed a large number of tuples (and pages from "shared_buffers"), which
the OS then shows as part of that process's memory, even though *shared*
buffers are not specific to that one process.
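For scale, multiplying the planner's own row and width estimates from the posted plan gives the volume of tuple data flowing through the Append. This is a rough sketch based only on the planner's estimates; it says nothing about actual private memory use, only why such a scan touches so many shared-buffer pages.

```python
# Tuple volume implied by the posted plan's estimates (illustrative only).
rows = 3_160_784_900     # rows= from the top Result node
width = 288              # width= in bytes, from the same node
gib = rows * width / 1024**3
print(round(gib))        # hundreds of GiB of estimated tuple data
```

Estimates that large also suggest the planner's row counts may be far off for this view, which is worth checking with EXPLAIN ANALYZE.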

-- 
Justin




Re: DB running out of memory issues after upgrade

2020-02-18 Thread Merlin Moncure
On Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby  wrote:
> This is almost certainly unrelated.  It looks like that query did a seq scan
> and accessed a large number of tuples (and pages from "shared_buffers"), which
> the OS then shows as part of that processes memory, even though *shared*
> buffers are not specific to that one process.

Yeah.  This server looks highly overprovisioned; I'm particularly
suspicious of the high max_connections setting.  To ferret this out
I'd be tracking connections in the database, both idle and not idle,
continuously.  The solution is most likely to install a connection
pooler such as pgbouncer.
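One widely cited starting-point heuristic for sizing the pool behind such a pooler (a general rule of thumb, not something stated in this thread) is roughly twice the core count plus the number of spindles:

```python
# Starting-point pool size heuristic (an assumption; tune against real load).
core_count = 4                # 4 vCore, from the spec posted upthread
effective_spindle_count = 1   # hypothetical: a single data volume
pool_size = core_count * 2 + effective_spindle_count
print(pool_size)              # 9, far below the configured max_connections of 1743
```

Even a generous multiple of this heuristic would be orders of magnitude below 1743, which is the point: the pooler queues excess clients instead of letting them each claim backend memory.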

merlin




Re: DB running out of memory issues after upgrade

2020-02-18 Thread Nagaraj Raj
 Hi Merlin,
It's configured with a high value for max_connections, but active and idle
sessions together have never exceeded 50.

DB size: 20 GB
Table size: 30 MB
RAM: 16 GB
vCore: 4

Yes, it's the view I posted earlier, and here is the query plan for the new
actual view:

Append  (cost=0.00..47979735.57 rows=3194327000 width=288)
  ->  Seq Scan on msghist  (cost=0.00..15847101.30 rows=316270 width=288)
  ->  Seq Scan on msghist msghist_1  (cost=0.00..189364.27 rows=31627000 width=288)

Thanks,
Rj

Re: SubtransControlLock and performance problems

2020-02-18 Thread Pavel Stehule
On Tue, Feb 18, 2020 at 18:27, Laurenz Albe wrote:

> On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:
> > I tested
> >
> > CREATE OR REPLACE FUNCTION public.fx(integer)
> >  RETURNS void
> >  LANGUAGE plpgsql
> > AS $function$
> > begin
> >   for i in 1..$1 loop
> >   begin
> > insert into foo values(i);
> > exception when others then
> >   raise notice 'yyy';
> > end;
> > end loop;
> > end;
> > $function$
> >
> > and I don't see any significant difference between numbers less than 64
> and higher
>
> Did you have several concurrent sessions accessing the rows that others
> created?
>

no, I didn't

Pavel


> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


tablespace to benefit from ssd ?

2020-02-18 Thread Nicolas PARIS
Hi

I have both HDD and SSD disks on the Postgres server. The cluster is
currently created on the HDD only. I am considering using a tablespace
to put some highly used Postgres objects on the SSD disk. Of course the
SSD is small compared to the HDD, and I need to choose carefully which
objects are stored on that side.

I am wondering what kind of object (indexes, data) would benefit from
the SSD. The database primary/foreign keys are heavily used and there is
almost no sequential scan. However, the server has a large amount of RAM
and I suspect all of those indexes are already cached there.

I have read that tablespaces introduce maintenance overhead and
complicate replication. But on the other hand I have this SSD disk ready
for something.

Any recommendation?

-- 
nicolas paris