Re: ERROR: too many dynamic shared memory segments

2020-01-29 Thread Nicola Contu
Hello, may I ask you for feedback?
Thanks a lot

On Tue, Jan 21, 2020 at 5:14 PM Nicola Contu <nicola.co...@gmail.com> wrote:

> We also reverted this param:
>
> cmdv3=# show max_parallel_workers_per_gather;
>  max_parallel_workers_per_gather
> ---------------------------------
>  2
> (1 row)
>
> It was set to 8.
>
>
> On Tue, Jan 21, 2020 at 4:06 PM Nicola Contu <nicola.co...@gmail.com> wrote:
>
>> Hey Thomas,
>> after a few months, we started having this issue again.
>> So we reverted the work_mem parameter to 600MB instead of 2GB.
>> But the issue is still there. A query hit a segmentation fault, the DB
>> went into recovery mode, and our app went read-only for a few minutes.
>>
>> I understand we can increase max_connections so we can have many more
>> segments.
>>
>> My question is: is there a way to know how many segments we have reached?
>> Currently we have max_connections = 220, so by your formula (64 + 2 * 220)
>> we have about 500 shared segments.
>> We would like to increase that number to 300 or 400, but it would be great
>> to know beforehand whether that will solve the issue, as changing it
>> requires a restart of the service.
>>
>> I know you were also talking about a redesign of this part of PostgreSQL. Do
>> you know if anything has changed in any of the newer versions after 11.5?
>>
>> Thanks a lot,
>> Nicola
>>
>>
>>
>>
>>
>> On Thu, Sep 12, 2019 at 1:01 AM Thomas Munro <thomas.mu...@gmail.com> wrote:
>>
>>> On Wed, Sep 11, 2019 at 11:20 PM Nicola Contu wrote:
>>> > If the error persists I will try to revert the work_mem.
>>> > Thanks a lot
>>>
>>> Hi Nicola,
>>>
>>> It's hard to say exactly what the cause of the problem is in your case
>>> and how to avoid it, without knowing what your query plans look like.
>>> PostgreSQL allows 64 + 2 * max_connections segments to exist at a time,
>>> and it needs a number of them that depends on work_mem (in the case of
>>> Parallel Hash Join and Parallel Bitmap Index Scan), and also depends
>>> on the number of Gather nodes that appear in the plan, which in some
>>> unusual cases can result from partitioning.
>>>
>>> I've seen people reaching this error by running a lot of parallel
>>> queries concurrently.  If that's the cause, then you can definitely
>>> get some relief by turning work_mem down, or by turning
>>> max_connections up (even though you don't want to allow more
>>> connections -- because it influences the formula for deciding on the
>>> DSM segment limit).  We should probably adjust some of the internal
>>> constants to give us more slots, to avoid that problem, as discussed
>>> here:
>>>
>>>
>>> https://www.postgresql.org/message-id/flat/CA%2BhUKGL6H2BpGbiF7Lj6QiTjTGyTLW_vLR%3DSn2tEBeTcYXiMKw%40mail.gmail.com
>>>
>>> I've also seen people reaching this error by somehow coming up with
>>> plans that had a very large number of Gather nodes in them,
>>> corresponding to partitions; that's probably a bad plan (it'd
>>> presumably be better to terminate parallelism higher up in the plan,
>>> but these plans do seem to exist in the wild; I don't recall exactly
>>> why).  I think we need a bit of a redesign so that if there are
>>> multiple Gather nodes, they share the same main DSM segment, instead
>>> of blowing through this limit.
>>>
>>> --
>>> Thomas Munro
>>> https://enterprisedb.com
>>>
>>


Re: ERROR: too many dynamic shared memory segments

2020-01-29 Thread Thomas Munro
On Wed, Jan 22, 2020 at 4:06 AM Nicola Contu  wrote:
> after a few months, we started having this issue again.
> So we reverted the work_mem parameter to 600MB instead of 2GB.
> But the issue is still there. A query hit a segmentation fault, the DB went
> into recovery mode, and our app went read-only for a few minutes.

Hi Nicola,
Hmm, a segmentation fault sounds like a different problem.  Can you
please share the exact error messages from PostgreSQL and OS logs?

> I understand we can increase max_connections so we can have many more 
> segments.
>
> My question is: is there a way to know how many segments we have reached?

If you're on Linux, you can probably see them with "ls /dev/shm".
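Counting the entries whose names start with "PostgreSQL" -- for example with
"ls /dev/shm | grep -c PostgreSQL" -- gives a rough idea of how many slots are
in use at a busy moment, assuming the default POSIX DSM implementation, where
each segment shows up as one such file.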

> Currently we have max_connections = 220, so by your formula (64 + 2 * 220)
> we have about 500 shared segments.
> We would like to increase that number to 300 or 400, but it would be great
> to know beforehand whether that will solve the issue, as changing it
> requires a restart of the service.
>
> I know you were also talking about a redesign of this part of PostgreSQL. Do you
> know if anything has changed in any of the newer versions after 11.5?

It's possible that we should increase a couple of constants used in the
formula -- I'll look into that again.  But first I'd like to see if
we're even investigating the right problem here.
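
As a back-of-envelope check of the current budget, something like this can be
run (a sketch, assuming the 64 + 2 * max_connections formula above; the
constants can change between versions):

-- approximate DSM slot budget on PostgreSQL 11
SELECT 64 + 2 * current_setting('max_connections')::int AS approx_dsm_slots;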




Re: ERROR: too many dynamic shared memory segments

2020-01-29 Thread Nicola Contu
This is the error in the postgres log for the segmentation fault:

2020-01-21 14:20:29 GMT [] [4]: [108-1] db=,user= LOG:  server process
(PID 2042) was terminated by signal 11: Segmentation fault
2020-01-21 14:20:29 GMT [] [4]: [109-1] db=,user= DETAIL:  Failed
process was running: select pid from pg_stat_activity where query ilike
'REFRESH MATERIALIZED VIEW CONCURRENTLY matview_vrs_request_stats'
2020-01-21 14:20:29 GMT [] [4]: [110-1] db=,user= LOG:  terminating any
other active server processes
2020-01-21 14:20:29 GMT [127.0.0.1(34256)] [2724]: [4-1]
db=cmdv3,user=admin WARNING:  terminating connection because of crash of
another server process
2020-01-21 14:20:29 GMT [127.0.0.1(34256)] [2724]: [5-1]
db=cmdv3,user=admin DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2020-01-21 14:20:29 GMT [127.0.0.1(34256)] [2724]: [6-1]
db=cmdv3,user=admin HINT:  In a moment you should be able to reconnect to
the database and repeat your command.
2020-01-21 14:20:29 GMT [127.0.0.1(34026)] [2055]: [5-1]
db=cmdv3,user=admin WARNING:  terminating connection because of crash of
another server process

At the CentOS level:
Jan 21 14:20:29 usnyh-cmd1.gt-t.net kernel: postmaster[2042]: segfault at 0
ip 0048bef4 sp 7ffdf4955bb0 error 4 in postgres[40+6c5000]
Jan 21 14:20:29 usnyh-cmd1.gt-t.net pgbouncer: 2020-01-21 14:20:29.905 GMT
[33727] WARNING C-0x21526e8: cmdv3/admin@10.151.2.154:39688 pooler error:
server conn crashed?
Jan 21 14:20:29 usnyh-cmd1.gt-t.net pgbouncer: 2020-01-21 14:20:29.905 GMT
[33727] WARNING S-0x1f5ff58: cmdv3/admin@127.0.0.1:5432 got packet 'N' from
server when not linked
Jan 21 14:20:29 usnyh-cmd1.gt-t.net pgbouncer: 2020-01-21 14:20:29.905 GMT
[33727] WARNING S-0x25097f0: cmdv3/admin@127.0.0.1:5432 got packet 'N' from
server when not linked
Jan 21 14:20:29 usnyh-cmd1.gt-t.net pgbouncer: 2020-01-21 14:20:29.906 GMT
[33727] WARNING S-0x2508b60: cmdv3/admin@127.0.0.1:5432 got packet 'N' from
server when not linked
Jan 21 14:20:29 usnyh-cmd1.gt-t.net pgbouncer: 2020-01-21 14:20:29.981 GMT
[33727] ERROR S: login failed: FATAL: the database system is in recovery
mode


> If you're on Linux, you can probably see them with "ls /dev/shm".

I see a lot of files there, and they are empty when I cat them. What can I do
with them?

Those are two different problems I guess, but they are related because
right before the Segmentation Fault I see a lot of shared segment errors in
the postgres log.



Re: ERROR: too many dynamic shared memory segments

2020-01-29 Thread Julian Backes
Hi,

we only had the "too many dynamic shared memory segments" error but no
segmentation faults. The error started occurring after
upgrading from postgres 10 to postgres 12 (server has 24 cores / 48
threads, i.e. many parallel workers). The error itself was not that much of
a problem but /dev/shm started filling up with orphaned files which
probably (?) had not been cleaned up by postgres after the parallel workers
died. As a consequence, after some time, /dev/shm was full and everything
crashed.
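(A periodic "df -h /dev/shm" is an easy way to watch for that, by the way.)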

Unfortunately, the only "solution" we have found so far was to increase
max_connections from 100 to 1000. After that (about 2 months ago, I think),
the error went away.

Maybe this helps...

Julian


Re: ERROR: too many dynamic shared memory segments

2020-01-29 Thread Thomas Munro
On Wed, Jan 29, 2020 at 10:37 PM Nicola Contu  wrote:
> This is the error on postgres log of the segmentation fault :
>
> 2020-01-21 14:20:29 GMT [] [4]: [108-1] db=,user= LOG:  server process 
> (PID 2042) was terminated by signal 11: Segmentation fault
> 2020-01-21 14:20:29 GMT [] [4]: [109-1] db=,user= DETAIL:  Failed process 
> was running: select pid from pg_stat_activity where query ilike 'REFRESH 
> MATERIALIZED VIEW CONCURRENTLY matview_vrs_request_stats'
> 2020-01-21 14:20:29 GMT [] [4]: [110-1] db=,user= LOG:  terminating any 
> other active server processes

Ok, this is a bug.  Do you happen to have a core file?  I don't recall
where CentOS puts them.
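"cat /proc/sys/kernel/core_pattern" should tell you where cores go; on CentOS
with abrt running they are typically intercepted and land under
/var/spool/abrt.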

> > If you're on Linux, you can probably see them with "ls /dev/shm".
>
> I see a lot of files there, and they are empty when I cat them. What can I do
> with them?

Not much, but it tells you approximately how many 'slots' are in use
at a given time (i.e. because of currently running parallel queries), if
they were created since PostgreSQL started up (if they're older ones
they could have leaked from a crashed server, but we try to avoid that
by trying to clean them up when you restart).

> Those are two different problems I guess, but they are related because right 
> before the Segmentation Fault I see a lot of shared segment errors in the 
> postgres log.

That gave me an idea...  I hacked my copy of PostgreSQL to flip a coin
to decide whether to pretend there are no slots free (see below), and
I managed to make it crash in the regression tests when doing a
parallel index build.  It's late here now, but I'll look into that
tomorrow.  It's possible that the parallel index code needs to learn
to cope with that.

#2  0x00a096f6 in SharedFileSetInit (fileset=0x80b2fe14c,
seg=0x0) at sharedfileset.c:71
#3  0x00c72440 in tuplesort_initialize_shared
(shared=0x80b2fe140, nWorkers=2, seg=0x0) at tuplesort.c:4341
#4  0x005ab405 in _bt_begin_parallel
(buildstate=0x7fffc070, isconcurrent=false, request=1) at
nbtsort.c:1402
#5  0x005aa7c7 in _bt_spools_heapscan (heap=0x801ddd7e8,
index=0x801dddc18, buildstate=0x7fffc070, indexInfo=0x80b2b62d0)
at nbtsort.c:396
#6  0x005aa695 in btbuild (heap=0x801ddd7e8,
index=0x801dddc18, indexInfo=0x80b2b62d0) at nbtsort.c:328
#7  0x00645b5c in index_build (heapRelation=0x801ddd7e8,
indexRelation=0x801dddc18, indexInfo=0x80b2b62d0, isreindex=false,
parallel=true) at index.c:2879
#8  0x00643e5c in index_create (heapRelation=0x801ddd7e8,
indexRelationName=0x7fffc510 "pg_toast_24587_index",
indexRelationId=24603, parentIndexRelid=0,

I don't know if that's the bug that you're hitting, but it definitely
could be: REFRESH MATERIALIZED VIEW could be rebuilding an index.

===

diff --git a/src/backend/storage/ipc/dsm.c b/src/backend/storage/ipc/dsm.c
index 90e0d739f8..f0b49d94ee 100644
--- a/src/backend/storage/ipc/dsm.c
+++ b/src/backend/storage/ipc/dsm.c
@@ -468,6 +468,13 @@ dsm_create(Size size, int flags)
 	nitems = dsm_control->nitems;
 	for (i = 0; i < nitems; ++i)
 	{
+		/* BEGIN HACK */
+		if (random() % 10 > 5)
+		{
+			nitems = dsm_control->maxitems;
+			break;
+		}
+		/* END HACK */
 		if (dsm_control->item[i].refcnt == 0)
 		{
 			dsm_control->item[i].handle = seg->handle;




Re: ERROR: too many dynamic shared memory segments

2020-01-29 Thread Thomas Munro
On Wed, Jan 29, 2020 at 11:24 PM Julian Backes  wrote:
> we only had the "too many dynamic shared memory segments" error but no
> segmentation faults. The error started occurring after upgrading from
> postgres 10 to postgres 12 (server has 24 cores / 48 threads, i.e. many
> parallel workers). The error itself was not that much of a problem but
> /dev/shm started filling up with orphaned files which probably (?) had not
> been cleaned up by postgres after the parallel workers died. As a
> consequence, after some time, /dev/shm was full and everything crashed.

Oh, thanks for the report.  I think I see what was happening there, and
it's a third independent problem.  The code in dsm_create() does
DSM_OP_DESTROY (i.e. cleans up) in the DSM_CREATE_NULL_IF_MAXSEGMENTS
case, but in the case where you see "ERROR: too many dynamic shared
memory segments" it completely fails to clean up after itself.  I can
reproduce that here.  That's a terrible bug, and has been sitting in
the tree for 5 years.

> Unfortunately, the only "solution" we have found so far was to increase
> max_connections from 100 to 1000. After that (about 2 months ago, I think),
> the error went away.

I'll take that as a vote for increasing the number of slots.




Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-29 Thread Achilleas Mantzios

Hello Dear Postgresql ppl,
I have a table with date ranges and need to express the following constraint:
allow overlaps only if there is complete containment, e.g.
allow values in rows like :
[2020-01-01,2020-01-31)
[2020-01-02,2020-01-10)
[2020-01-10,2020-01-20)

but disallow rows like

[2020-01-02,2020-01-10)
[2020-01-08,2020-01-11)

I think that writing a new commutative range operator, e.g. <@@>, which would
return true if the left operand either is contained by or contains the right
operand, and false otherwise, would solve this. I am just wondering if there
is a more elegant and economical way to express it (besides writing a trigger,
which is always an option).


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Perumal Raj
Hi All,

We have recently upgraded postgres from 9.2 to 11.6 and started seeing
performance issues immediately, and we were able to fix them by disabling
the parameter enable_seqscan.

Question:
Should I keep the above parameter permanently disabled? If not, why did the
behavior change in the higher version?

Note:
Table ANALYZE was completed as part of the upgrade activity.

Thanks
Raj


Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Adrian Klaver

On 1/29/20 9:39 AM, Perumal Raj wrote:

Hi All,

We have recently upgraded postgres from 9.2 to 11.6 and started seeing
performance issues immediately, and we were able to fix them by disabling
the parameter enable_seqscan.


Question:
Should I keep the above parameter permanently disabled? If not, why did the
behavior change in the higher version?


Without an explain analyze of a representative query it would be hard to 
say.


Also, the schema of the tables involved would be helpful.



Note:
Table ANALYZE was completed as part of the upgrade activity.


Was this on a single table or all tables?



Thanks
Raj



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Tom Lane
Perumal Raj  writes:
> We have recently upgraded postgres from 9.2 to 11.6 and started seeing
> performance issues immediately, and we were able to fix them by disabling
> the parameter enable_seqscan.
> Question:
> Should I keep the above parameter permanently disabled? If not, why did the
> behavior change in the higher version?

This is unanswerable with the amount of information you've given.
Yes, turning off enable_seqscan is a bad idea in general, but explaining why
you got a worse plan with it enabled requires details.

https://wiki.postgresql.org/wiki/Slow_Query_Questions

regards, tom lane




Re: Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-29 Thread Adrian Klaver

On 1/29/20 8:12 AM, Achilleas Mantzios wrote:

Hello Dear Postgresql ppl,
I have a table with date ranges and need to express the following 
constraint: allow overlaps only if there is complete containment, e.g.

allow values in rows like :
[2020-01-01,2020-01-31)
[2020-01-02,2020-01-10)
[2020-01-10,2020-01-20)

but disallow rows like

[2020-01-02,2020-01-10)
[2020-01-08,2020-01-11)


I'm missing something. Can you provide a more complete example?



I think that writing a new commutative range operator, e.g. <@@>, which
would return true if the left operand either is contained by or contains
the right operand, and false otherwise, would solve this. I am just
wondering if there is a more elegant and economical way to express it
(besides writing a trigger, which is always an option).


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-29 Thread Michael Lewis
If the new value does not overlap with any existing element, allow. If it does
overlap, then it must be fully contained by the existing element, or the
existing element must be fully contained by it. Else, reject. Is that right?


Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Perumal Raj
Hi Tom / Adrian,

The issue is not specific to one table or a particular query. Also, there was
no change to any DB parameter after the upgrade.

That is the only way I can make most of the queries run as they did before
the upgrade.

Note:
Some web references say the engine will take some time to adjust until it
runs autovacuum.






Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Peter J. Holzer
On 2020-01-29 09:39:03 -0800, Perumal Raj wrote:
> We have recently upgraded postgres from 9.2 to 11.6 and started seeing
> performance issues immediately, and we were able to fix them by disabling
> the parameter enable_seqscan.

How did you upgrade?

If your upgrade involved a dump and restore, you should invoke ANALYZE
for each database (I think autovacuum will analyze all tables
eventually, but takes its time).
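
For example, in each database (a sketch; "vacuumdb --all --analyze-in-stages"
does roughly the same job from the shell):

-- rebuild planner statistics for every table in the current database
ANALYZE;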

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"




Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Perumal Raj
Hi Peter,

I strongly agree.

I have used pg_upgrade, which runs "analyze_new_cluster.sh" as a post-upgrade
activity across all DBs on the cluster.
Also, I have executed a manual vacuum on all individual tables. However, the
behavior is the same unless I disable the above-mentioned parameter.

Regards,
Raj




Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Adrian Klaver

On 1/29/20 1:58 PM, Perumal Raj wrote:

Hi Tom / Adrian,

The issue is not specific to one table or a particular query. Also, there
was no change to any DB parameter after the upgrade.


You obviously are seeing some difference, so pick a query and do an 
EXPLAIN ANALYZE on it. Without some actual performance data this issue 
is not going to be solved.
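
Something along these lines, with one of the regressed queries substituted in
(the table and filter here are placeholders):

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_table WHERE some_col = 42;
SET enable_seqscan = off;  -- session-local, just to capture the "good" plan
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_table WHERE some_col = 42;
RESET enable_seqscan;

Posting both plans shows which estimates led the planner astray.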




That is the only way I can make most of the queries run as they did
before the upgrade.


Note:
Some web references say the engine will take some time to adjust until it
runs autovacuum.


The above assumes you have not done a manual ANALYZE on the database, as 
ANALYZE is done as part of autovacuum.











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Performance Issue after upgrade from 9 to 11

2020-01-29 Thread Tom Lane
Perumal Raj  writes:
> I have used pg_upgrade, which runs "analyze_new_cluster.sh" as a post-upgrade
> activity across all DBs on the cluster.

pg_upgrade itself won't run that script, it only creates it for you
to run.

> Also, I have executed a manual vacuum on all individual tables. However, the
> behavior is the same unless I disable the above-mentioned parameter.

Did those manual vacuums include an "analyze" option?

If you in fact haven't got analyze results on your tables, that might
explain your problems.  autovacuum would eventually rectify the situation
for larger tables, but it might never get around to small seldom-updated
tables.
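
A quick way to check is to look at when each table was last analyzed, e.g.:

-- NULL in both columns means the table has never been analyzed
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;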

If that's not it, I again direct your attention to the
Slow_Query_Questions wiki page.  That summarizes a lot of past experience
with solving performance problems, and you're not endearing yourself to
potential helpers by refusing to follow the process.

regards, tom lane




Re: Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-29 Thread Achilleas Mantzios

On 29/1/20 8:43 PM, Michael Lewis wrote:

If the new value does not overlap with any existing element, allow. If it does
overlap, then it must be fully contained by the existing element, or 
the existing element must be fully contained by it. Else, reject. Is 
that right?



Exactly.
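
A minimal trigger-based sketch of those semantics (the table and column names
are hypothetical, and unlike a real exclusion constraint a trigger does not
guard against concurrent insertions without extra locking):

CREATE TABLE periods (
    id       serial PRIMARY KEY,
    validity daterange NOT NULL
);

CREATE FUNCTION periods_check_containment() RETURNS trigger AS $$
BEGIN
    IF EXISTS (
        SELECT 1
        FROM periods p
        WHERE p.id <> NEW.id
          AND p.validity && NEW.validity           -- overlaps ...
          AND NOT (p.validity @> NEW.validity      -- ... but neither range
                OR p.validity <@ NEW.validity)     --     contains the other
    ) THEN
        RAISE EXCEPTION 'range % overlaps an existing range without containment',
            NEW.validity;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER periods_containment
    BEFORE INSERT OR UPDATE OF validity ON periods
    FOR EACH ROW EXECUTE FUNCTION periods_check_containment();

With that in place, inserting [2020-01-02,2020-01-10) and then
[2020-01-08,2020-01-11) raises the exception, while the fully nested ranges
from the first example are accepted.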