Re: server hardware tuning.

2019-02-14 Thread suganthi Sekar
Hi Team,

I am using PostgreSQL 11 and have two partitioned tables. When I join both tables
in a query, one table is pruned to the exact partition, but the other table scans
all of its partitions.

Please clarify this.

I have enabled the parameter below in the configuration file.
Note  :  alter system set enable_partitionwise_join  to 'on';
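(As a side note, a sketch of how to double-check that the ALTER SYSTEM setting
actually took effect, since it requires a configuration reload:)

select pg_reload_conf();
show enable_partitionwise_join;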


Example :

explain analyze
select * from call_report1 as a  inner join  call_report2 as b on 
a.call_id=b.call_id
 where a.call_created_date ='2017-11-01' and '2017-11-30'



 "Hash Right Join  (cost=8.19..50.47 rows=2 width=3635) (actual 
time=0.426..0.447 rows=7 loops=1)"
"  Hash Cond: (b.call_id = a.call_id)"
"  ->  Append  (cost=0.00..41.81 rows=121 width=2319) (actual time=0.040..0.170 
rows=104 loops=1)"
"->  Seq Scan on call_report2 b  (cost=0.00..0.00 rows=1 width=528) 
(actual time=0.010..0.010 rows=0 loops=1)"
"->  Seq Scan on call_report2_201803 b_1  (cost=0.00..10.30 rows=30 
width=2334) (actual time=0.029..0.031 rows=14 loops=1)"
"->  Seq Scan on call_report2_201711 b_2  (cost=0.00..10.30 rows=30 
width=2334) (actual time=0.014..0.015 rows=7 loops=1)"
"->  Seq Scan on call_report2_201712 b_3  (cost=0.00..10.30 rows=30 
width=2334) (actual time=0.017..0.047 rows=34 loops=1)"
"->  Seq Scan on call_report2_201801 b_4  (cost=0.00..10.30 rows=30 
width=2334) (actual time=0.017..0.058 rows=49 loops=1)"
"  ->  Hash  (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 
rows=7 loops=1)"
"Buckets: 1024  Batches: 1  Memory Usage: 12kB"
"->  Append  (cost=0.00..8.17 rows=2 width=1314) (actual 
time=0.053..0.060 rows=7 loops=1)"
"  ->  Seq Scan on call_report1 a  (cost=0.00..0.00 rows=1 
width=437) (actual time=0.022..0.022 rows=0 loops=1)"
"Filter: ((call_created_date >= '2017-11-01'::date) AND 
(call_created_date <= '2017-11-30'::date))"
"  ->  Index Scan using idx_call_report1_201711_ccd on 
call_report1_201711 a_1  (cost=0.14..8.16 rows=1 width=2190) (actual 
time=0.029..0.034 rows=7 loops=1)"
"Index Cond: ((call_created_date >= '2017-11-01'::date) AND 
(call_created_date <= '2017-11-30'::date))"
"Planning Time: 20.866 ms"
"Execution Time: 1.205 ms"



From: suganthi Sekar
Sent: 04 February 2019 15:27:31
To: [email protected]
Subject: Fw: server hardware tuning.



Hi ,


I need to know how to calculate hardware sizing for a database or query:


RAM

CPU

Config tuning


Requirement :


1100 concurrent connections

1600 columns in a table

1 GB of data can be selected, and dynamic aggregation will happen on it


Regards

SuganthiSekar


Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread Justin Pryzby
On Thu, Feb 14, 2019 at 09:38:52AM +, suganthi Sekar wrote:
>  i am using Postgresql 11, i have 2 partition table , when i joined both 
> table in query
> a table  its goes exact partition table  , but other table scan all partition
> 
> please clarify on this .
> 
> Example :
> 
> explain analyze
> select * from call_report1 as a  inner join  call_report2 as b on 
> a.call_id=b.call_id
>  where a.call_created_date ='2017-11-01' and '2017-11-30'

Looks like this query was manually edited and should say:
>  where a.call_created_date >='2017-11-01' AND a.call_created_date<'2017-11-30'
Right?

The issue is described well here:
https://www.postgresql.org/message-id/flat/7DF51702-0F6A-4571-80BB-188AAEF260DA%40gmail.com
https://www.postgresql.org/message-id/499.1496696552%40sss.pgh.pa.us

You can work around it by specifying the same condition on b.call_created_date:
>  AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'
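
For illustration, the full query would then look roughly like this (a sketch
reusing the tables from the original post):

select *
from call_report1 as a
inner join call_report2 as b on a.call_id = b.call_id
where a.call_created_date >= '2017-11-01' and a.call_created_date < '2017-11-30'
  and b.call_created_date >= '2017-11-01' and b.call_created_date < '2017-11-30';

With the redundant range on b, the planner can prune call_report2's partitions too.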

Justin



Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread Justin Pryzby
On Thu, Feb 14, 2019 at 10:38:36AM +, suganthi Sekar wrote:
> u mean the below parameter need to set on . its already on only.
>   alter system set  constraint_exclusion  to 'on';

No, I said:
> You can work around it by specifying the same condition on 
> b.call_created_date:
> >  AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'



Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread suganthi Sekar
Hi,


Do you mean the parameter below needs to be set to on? It is already on:


  alter system set  constraint_exclusion  to 'on';


Regards,

Suganthi Sekar


From: Justin Pryzby 
Sent: 14 February 2019 15:35:33
To: suganthi Sekar
Cc: [email protected]
Subject: Re: constraint exclusion with ineq condition (Re: server hardware 
tuning.)

On Thu, Feb 14, 2019 at 09:38:52AM +, suganthi Sekar wrote:
>  i am using Postgresql 11, i have 2 partition table , when i joined both 
> table in query
> a table  its goes exact partition table  , but other table scan all partition
>
> please clarify on this .
>
> Example :
>
> explain analyze
> select * from call_report1 as a  inner join  call_report2 as b on 
> a.call_id=b.call_id
>  where a.call_created_date ='2017-11-01' and '2017-11-30'

Looks like this query was manually edited and should say:
>  where a.call_created_date >='2017-11-01' AND a.call_created_date<'2017-11-30'
Right?

The issue is described well here:
https://www.postgresql.org/message-id/flat/7DF51702-0F6A-4571-80BB-188AAEF260DA%40gmail.com
https://www.postgresql.org/message-id/499.1496696552%40sss.pgh.pa.us

You can work around it by specifying the same condition on b.call_created_date:
>  AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'

Justin


Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread suganthi Sekar
Hi,

Thanks, I know it works if we give the condition explicitly in the WHERE clause.

I thought this issue was fixed in PostgreSQL 11 with the parameter below?

 enable_partitionwise_join  to 'on';

 What is the use of enable_partitionwise_join set to 'on'?

Thanks for your response.

Regards
Suganthi Sekar

From: Justin Pryzby 
Sent: 14 February 2019 16:10:01
To: suganthi Sekar
Cc: [email protected]
Subject: Re: constraint exclusion with ineq condition (Re: server hardware 
tuning.)

On Thu, Feb 14, 2019 at 10:38:36AM +, suganthi Sekar wrote:
> u mean the below parameter need to set on . its already on only.
>   alter system set  constraint_exclusion  to 'on';

No, I said:
> You can work around it by specifying the same condition on 
> b.call_created_date:
> >  AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'


Re: partition pruning

2019-02-14 Thread Laurenz Albe
suganthi Sekar wrote:
>  i am using Postgresql 11, i have 2 partition table , when i joined both 
> table in query 
> a table  its goes exact partition table  , but other table scan all partition
> 
> please clarify on this .
> 
> i have enabled below parameter on in configuration file
> Note  :  alter system set enable_partitionwise_join  to 'on';
> 
> 
> Example : 
> 
> explain analyze
> select * from call_report1 as a  inner join  call_report2 as b on 
> a.call_id=b.call_id
>  where a.call_created_date ='2017-11-01' and '2017-11-30'
> 
> 
> 
>  "Hash Right Join  (cost=8.19..50.47 rows=2 width=3635) (actual 
> time=0.426..0.447 rows=7 loops=1)"
> "  Hash Cond: (b.call_id = a.call_id)"
> "  ->  Append  (cost=0.00..41.81 rows=121 width=2319) (actual 
> time=0.040..0.170 rows=104 loops=1)"
> "->  Seq Scan on call_report2 b  (cost=0.00..0.00 rows=1 width=528) 
> (actual time=0.010..0.010 rows=0 loops=1)"
> "->  Seq Scan on call_report2_201803 b_1  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.029..0.031 rows=14 loops=1)"
> "->  Seq Scan on call_report2_201711 b_2  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.014..0.015 rows=7 loops=1)"
> "->  Seq Scan on call_report2_201712 b_3  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.017..0.047 rows=34 loops=1)"
> "->  Seq Scan on call_report2_201801 b_4  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.017..0.058 rows=49 loops=1)"
> "  ->  Hash  (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 
> rows=7 loops=1)"
> "Buckets: 1024  Batches: 1  Memory Usage: 12kB"
> "->  Append  (cost=0.00..8.17 rows=2 width=1314) (actual 
> time=0.053..0.060 rows=7 loops=1)"
> "  ->  Seq Scan on call_report1 a  (cost=0.00..0.00 rows=1 
> width=437) (actual time=0.022..0.022 rows=0 loops=1)"
> "Filter: ((call_created_date >= '2017-11-01'::date) AND 
> (call_created_date <= '2017-11-30'::date))"
> "  ->  Index Scan using idx_call_report1_201711_ccd on 
> call_report1_201711 a_1  (cost=0.14..8.16 rows=1 width=2190) (actual 
> time=0.029..0.034 rows=7 loops=1)"
> "Index Cond: ((call_created_date >= '2017-11-01'::date) 
> AND (call_created_date <= '2017-11-30'::date))"
> "Planning Time: 20.866 ms"
> "Execution Time: 1.205 ms"

There is no condition on the table "call_report2" in your query,
so it is not surprising that all partitions are scanned, right?

You have to add a WHERE condition that filters on the partitioning
column(s) of "call_report2".

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread Michael Lewis
What are these two tables partitioned by?

On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar wrote:
> Hi,
>
> Thanks, i know if explicitly we give in where condition it is working.
>
> i thought with below parameter in Postgresq11 this issue is fixed ?
>
>  enable_partitionwise_join  to 'on';
>
>  what is the use of enable_partitionwise_join  to 'on';
>
> Thanks for your response.
>
> Regards
> Suganthi Sekar
> --
> From: Justin Pryzby 
> Sent: 14 February 2019 16:10:01
> To: suganthi Sekar
> Cc: [email protected]
> Subject: Re: constraint exclusion with ineq condition (Re: server
> hardware tuning.)
>
> On Thu, Feb 14, 2019 at 10:38:36AM +, suganthi Sekar wrote:
> > u mean the below parameter need to set on . its already on only.
> >   alter system set  constraint_exclusion  to 'on';
>
> No, I said:
> > You can work around it by specifying the same condition on
> b.call_created_date:
> > >  AND b.call_created_date >='2017-11-01' AND
> b.call_created_date<'2017-11-30'
>


Re: partition pruning

2019-02-14 Thread Justin Pryzby
On Thu, Feb 14, 2019 at 01:37:49PM +0100, Laurenz Albe wrote:
> There is no condition on the table "call_report2" in your query,
> so it is not surprising that all partitions are scanned, right?

Some people find it surprising, since: a.call_id=b.call_id

suganthi Sekar wrote:
> > explain analyze
> > select * from call_report1 as a  inner join  call_report2 as b on 
> > a.call_id=b.call_id
> >  where a.call_created_date ='2017-11-01' and '2017-11-30'

Justin



Re: partition pruning

2019-02-14 Thread suganthi Sekar
Hi,


OK, thanks.


Regards,

Suganthi Sekar



From: Laurenz Albe 
Sent: 14 February 2019 18:07:49
To: suganthi Sekar; [email protected]
Subject: Re: partition pruning

suganthi Sekar wrote:
>  i am using Postgresql 11, i have 2 partition table , when i joined both 
> table in query
> a table  its goes exact partition table  , but other table scan all partition
>
> please clarify on this .
>
> i have enabled below parameter on in configuration file
> Note  :  alter system set enable_partitionwise_join  to 'on';
>
>
> Example :
>
> explain analyze
> select * from call_report1 as a  inner join  call_report2 as b on 
> a.call_id=b.call_id
>  where a.call_created_date ='2017-11-01' and '2017-11-30'
>
>
>
>  "Hash Right Join  (cost=8.19..50.47 rows=2 width=3635) (actual 
> time=0.426..0.447 rows=7 loops=1)"
> "  Hash Cond: (b.call_id = a.call_id)"
> "  ->  Append  (cost=0.00..41.81 rows=121 width=2319) (actual 
> time=0.040..0.170 rows=104 loops=1)"
> "->  Seq Scan on call_report2 b  (cost=0.00..0.00 rows=1 width=528) 
> (actual time=0.010..0.010 rows=0 loops=1)"
> "->  Seq Scan on call_report2_201803 b_1  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.029..0.031 rows=14 loops=1)"
> "->  Seq Scan on call_report2_201711 b_2  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.014..0.015 rows=7 loops=1)"
> "->  Seq Scan on call_report2_201712 b_3  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.017..0.047 rows=34 loops=1)"
> "->  Seq Scan on call_report2_201801 b_4  (cost=0.00..10.30 rows=30 
> width=2334) (actual time=0.017..0.058 rows=49 loops=1)"
> "  ->  Hash  (cost=8.17..8.17 rows=2 width=1314) (actual time=0.104..0.104 
> rows=7 loops=1)"
> "Buckets: 1024  Batches: 1  Memory Usage: 12kB"
> "->  Append  (cost=0.00..8.17 rows=2 width=1314) (actual 
> time=0.053..0.060 rows=7 loops=1)"
> "  ->  Seq Scan on call_report1 a  (cost=0.00..0.00 rows=1 
> width=437) (actual time=0.022..0.022 rows=0 loops=1)"
> "Filter: ((call_created_date >= '2017-11-01'::date) AND 
> (call_created_date <= '2017-11-30'::date))"
> "  ->  Index Scan using idx_call_report1_201711_ccd on 
> call_report1_201711 a_1  (cost=0.14..8.16 rows=1 width=2190) (actual 
> time=0.029..0.034 rows=7 loops=1)"
> "Index Cond: ((call_created_date >= '2017-11-01'::date) 
> AND (call_created_date <= '2017-11-30'::date))"
> "Planning Time: 20.866 ms"
> "Execution Time: 1.205 ms"

There is no condition on the table "call_report2" in your query,
so it is not surprising that all partitions are scanned, right?

You have to add a WHERE condition that filters on the partitioning
column(s) of "call_report2".

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com



Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread suganthi Sekar
Both tables are partitioned by the same column, call_created_date.

From: Michael Lewis 
Sent: 14 February 2019 19:35:48
To: suganthi Sekar
Cc: Justin Pryzby; [email protected]
Subject: Re: constraint exclusion with ineq condition (Re: server hardware 
tuning.)

What are these two tables partitioned by?

On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar
<[email protected]> wrote:
Hi,

Thanks, i know if explicitly we give in where condition it is working.

i thought with below parameter in Postgresq11 this issue is fixed ?

 enable_partitionwise_join  to 'on';

 what is the use of enable_partitionwise_join  to 'on';

Thanks for your response.

Regards
Suganthi Sekar

From: Justin Pryzby <[email protected]>
Sent: 14 February 2019 16:10:01
To: suganthi Sekar
Cc: 
[email protected]
Subject: Re: constraint exclusion with ineq condition (Re: server hardware 
tuning.)

On Thu, Feb 14, 2019 at 10:38:36AM +, suganthi Sekar wrote:
> u mean the below parameter need to set on . its already on only.
>   alter system set  constraint_exclusion  to 'on';

No, I said:
> You can work around it by specifying the same condition on 
> b.call_created_date:
> >  AND b.call_created_date >='2017-11-01' AND b.call_created_date<'2017-11-30'



Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Mariel Cherkassky
I checked the logs for when autovacuum vacuumed my big TOAST table
during the week, and I wanted to confirm what I think with you:
postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum
of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
postgresql-Fri.log- pages: 2253 removed, 13737828 remain
postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses, 19274530
dirtied
postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s
--
postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic vacuum of
table "myDB.pg_toast.pg_toast_1958391": index scans: 23
postgresql-Mon.log- pages: 0 removed, 23176876 remain
postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses, 38950869
dirtied
postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396 MiB/s
--
postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic vacuum
of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
postgresql-Mon.log- pages: 0 removed, 23176876 remain
postgresql-Mon.log- tuples: 114573 removed, 57785 remain
postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses, 15626466
dirtied
postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470 MiB/s
--
postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic vacuum of
table "myDB.pg_toast.pg_toast_1958391": index scans: 13
postgresql-Sat.log- pages: 0 removed, 13737828 remain
postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain
postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses, 22473776
dirtied
postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410 MiB/s
--
postgresql-Thu.log:2019-02-07 12:08:50 EST  29630  LOG:  automatic vacuum
of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
postgresql-Thu.log- pages: 0 removed, 10290976 remain
postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain
postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses, 19232835
dirtied
postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437 MiB/s
--
postgresql-Tue.log:2019-02-12 20:54:44 EST  21464  LOG:  automatic vacuum
of table "myDB.pg_toast.pg_toast_1958391": index scans: 10
postgresql-Tue.log- pages: 0 removed, 23176876 remain
postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain
postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses, 25472137
dirtied
postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385 MiB/s
--


Let's focus, for example, on one of the outputs:
postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum
of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
postgresql-Fri.log- pages: 2253 removed, 13737828 remain
postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses, 19274530
dirtied
postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s

The cost_limit is set to 200 (the default) and the cost_delay is set to 20 ms.
The calculation I did: (1*15031267 + 10*21081633 + 20*19274530) / 200 * 20 / 1000
= 61133.8197 seconds ~ 17h
So autovacuum was sleeping for about 17 hours? I think that I should increase the
cost_limit to the maximum specifically on the TOAST table. What do you think? Am
I wrong here?


On Thu, 7 Feb 2019 at 18:26, Jeff Janes <[email protected]> wrote:

> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <
> [email protected]> wrote:
>
> I have 3 questions :
>> 1)To what value do you recommend to increase the vacuum cost_limit ? 2000
>> seems reasonable ? Or maybe its better to leave it as default and assign a
>> specific value for big tables ?
>>
>
> That depends on your IO hardware, and your workload.  You wouldn't want
> background vacuum to use so much of your available IO that it starves your
> other processes.
>
>
>
>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a
>> specific table, it wait nap_time seconds and then it continue to work on
>> the same table ?
>>
>
> No, it waits for autovacuum_vacuum_cost_delay before resuming within the
> same table. During this delay, the table is still open and it still holds a
> lock on it, and holds the transaction open, etc.  Naptime is entirely
> different, it controls how often the vacuum scheduler checks to see which
> tables need to be vacuumed again.
>
>
>
>> 3)So in case I have a table that keeps growing (not fast because I set
>> the vacuum_scale_factor to 0 and the autovacuum_vacuum_threshold to 1).
>> If the table keep growing it means I should try to increase the cost right
>> ? Do you see any other option ?
>>
>
>  You can use pg_freespacemap to see if the free space is spread evenly
> throughout the table, or clustered together.  That might help figure out
> what is going on.  And, is it the table itself that is growing, or the
> index on it?

Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Michael Lewis
It is curious to me that the tuples remaining count varies so wildly. Is
this expected?


Michael Lewis

On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
[email protected]> wrote:

> I checked in the logs when the autovacuum vacuum my big toasted table
> during the week and I wanted to confirm with you what I think :
> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
> 19274530 dirtied
> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
> MiB/s
> --
> postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 23
> postgresql-Mon.log- pages: 0 removed, 23176876 remain
> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,
> 38950869 dirtied
> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396
> MiB/s
> --
> postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
> postgresql-Mon.log- pages: 0 removed, 23176876 remain
> postgresql-Mon.log- tuples: 114573 removed, 57785 remain
> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,
> 15626466 dirtied
> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470
> MiB/s
> --
> postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
> postgresql-Sat.log- pages: 0 removed, 13737828 remain
> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain
> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,
> 22473776 dirtied
> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410
> MiB/s
> --
> postgresql-Thu.log:2019-02-07 12:08:50 EST  29630  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
> postgresql-Thu.log- pages: 0 removed, 10290976 remain
> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain
> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,
> 19232835 dirtied
> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437
> MiB/s
> --
> postgresql-Tue.log:2019-02-12 20:54:44 EST  21464  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 10
> postgresql-Tue.log- pages: 0 removed, 23176876 remain
> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain
> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,
> 25472137 dirtied
> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385
> MiB/s
> --
>
>
> Lets focus for example on one of the outputs :
> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum
> of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses, 19274530
> dirtied
> *dirtied
> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
> MiB/s
>
> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms.
> The calculation I did : (1**15031267*+10**21081633*+20**19274530)*/200*20/1000
> = 61133.8197 seconds ~ 17H
> So autovacuum was laying down for 17h ? I think that I should increase the
> cost_limit to max specifically on the toasted table. What do you think ? Am
> I wrong here ?
>
>
> On Thu, 7 Feb 2019 at 18:26, Jeff Janes <[email protected]> wrote:
>
>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <
>> [email protected]> wrote:
>>
>> I have 3 questions :
>>> 1)To what value do you recommend to increase the vacuum cost_limit ?
>>> 2000 seems reasonable ? Or maybe its better to leave it as default and
>>> assign a specific value for big tables ?
>>>
>>
>> That depends on your IO hardware, and your workload.  You wouldn't want
>> background vacuum to use so much of your available IO that it starves your
>> other processes.
>>
>>
>>
>>> 2)When the autovacuum reaches the cost_limit while trying to vacuum a
>>> specific table, it wait nap_time seconds and then it continue to work on
>>> the same table ?
>>>
>>
>> No, it waits for autovacuum_vacuum_cost_delay before resuming within the
>> same table. During this delay, the table is still open and it still holds a
>> lock on it, and holds the transaction open, etc.  Naptime is entirely
>> different, it controls how often the vacuum scheduler checks to see which
>> tables need to be vacuumed again.
>>
>>
>>
>>> 3)So in case I have a table that keeps growing (not fast because I set
>>> the vacuum_scale_factor to 0 and the autovacuum_

Re: constraint exclusion with ineq condition (Re: server hardware tuning.)

2019-02-14 Thread Michael Lewis
Yeah, the planner doesn't know that call_created_date can be limited on
both tables unless you specify it in the WHERE condition, as Laurenz
said on another thread.


Michael Lewis

On Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar 
wrote:

> Both table Portion by  same column call_created_date
> 
> From: Michael Lewis 
> Sent: 14 February 2019 19:35:48
> To: suganthi Sekar
> Cc: Justin Pryzby; [email protected]
> Subject: Re: constraint exclusion with ineq condition (Re: server hardware
> tuning.)
>
> What are these two tables partitioned by?
>
> On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar   wrote:
> Hi,
>
> Thanks, i know if explicitly we give in where condition it is working.
>
> i thought with below parameter in Postgresq11 this issue is fixed ?
>
>  enable_partitionwise_join  to 'on';
>
>  what is the use of enable_partitionwise_join  to 'on';
>
> Thanks for your response.
>
> Regards
> Suganthi Sekar
> 
> From: Justin Pryzby <[email protected]>
> Sent: 14 February 2019 16:10:01
> To: suganthi Sekar
> Cc: [email protected]
> Subject: Re: constraint exclusion with ineq condition (Re: server hardware
> tuning.)
>
> On Thu, Feb 14, 2019 at 10:38:36AM +, suganthi Sekar wrote:
> > u mean the below parameter need to set on . its already on only.
> >   alter system set  constraint_exclusion  to 'on';
>
> No, I said:
> > You can work around it by specifying the same condition on
> b.call_created_date:
> > >  AND b.call_created_date >='2017-11-01' AND
> b.call_created_date<'2017-11-30'
>


Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Mariel Cherkassky
Maybe it will be clearer if I explain the table's purpose. The original
table contains rows for sessions in my app. Every session saves some raw
data for itself, which is stored in the TOAST table. We clean old sessions
(3+ days) every night. During the day sessions are created, so the size of
the table should grow during the day and be freed at night after the
autovacuum runs. However, autovacuum sleeps for a lot of time, and during
that time more sessions are created, so maybe this can explain the big size?
Do you think that by increasing the cost limit and decreasing the cost
delay I can solve the issue?

On Thu, Feb 14, 2019, 8:38 PM Michael Lewis wrote:
> It is curious to me that the tuples remaining count varies so wildly. Is
> this expected?
>
>
> *Michael Lewis*
>
> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
> [email protected]> wrote:
>
>> I checked in the logs when the autovacuum vacuum my big toasted table
>> during the week and I wanted to confirm with you what I think :
>> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
>> 19274530 dirtied
>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
>> MiB/s
>> --
>> postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 23
>> postgresql-Mon.log- pages: 0 removed, 23176876 remain
>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,
>> 38950869 dirtied
>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396
>> MiB/s
>> --
>> postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
>> postgresql-Mon.log- pages: 0 removed, 23176876 remain
>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain
>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,
>> 15626466 dirtied
>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470
>> MiB/s
>> --
>> postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
>> postgresql-Sat.log- pages: 0 removed, 13737828 remain
>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain
>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,
>> 22473776 dirtied
>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410
>> MiB/s
>> --
>> postgresql-Thu.log:2019-02-07 12:08:50 EST  29630  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
>> postgresql-Thu.log- pages: 0 removed, 10290976 remain
>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain
>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,
>> 19232835 dirtied
>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437
>> MiB/s
>> --
>> postgresql-Tue.log:2019-02-12 20:54:44 EST  21464  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 10
>> postgresql-Tue.log- pages: 0 removed, 23176876 remain
>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain
>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,
>> 25472137 dirtied
>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385
>> MiB/s
>> --
>>
>>
>> Lets focus for example on one of the outputs :
>> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic vacuum
>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
>> 19274530
>> dirtied
>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
>> MiB/s
>>
>> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms.
>> The calculation I did:
>> (1*15031267 + 10*21081633 + 20*19274530) / 200 * 20 / 1000
>> = 61133.8197 seconds ~ 17H
>> So autovacuum was laying down for 17h ? I think that I should increase
>> the cost_limit to max specifically on the toasted table. What do you think
>> ? Am I wrong here ?
>>
>>
>> On Thu, 7 Feb 2019 at 18:26, Jeff Janes <[email protected]> wrote:
>>
>>> On Thu, Feb 7, 2019 at 6:55 AM Mariel Cherkassky <
>>> [email protected]> wrote:
>>>
>>> I have 3 questions :
 1)To what value do you recommend to increase the vacuum cost_limit ?
 2000 seems reasonable ? Or maybe its better to leave it as default and
 assign a specific value for big tables ?

>>>
>>> That depends on your IO hardware, and your workload.  You wouldn't 

Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Michael Lewis
Thanks, that context is very enlightening. Do you manually vacuum after
doing the big purge of old session data? Is bloat causing issues for you?
Why is it a concern that autovacuum's behavior varies?


Michael Lewis

On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
[email protected]> wrote:

> Maybe by explaining the tables purpose it will be cleaner. The original
> table contains rows for sessions in my app. Every session saves for itself
> some raw data which is saved in the toasted table. We clean old sessions
> (3+ days) every night. During the day sessions are created so the size of
> the table should grow during the day and freed in the night after the
> autovacuum run.However, the autovacuums sleeps for alot of time and during
> that time more sessions are created so maybe this can explain the big size
> ? Do you think that by increasing the cost limit and decreasing the cost
> delay I can solve the issue ?
>
> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis 
>> It is curious to me that the tuples remaining count varies so wildly. Is
>> this expected?
>>
>>
>> *Michael Lewis*
>>
>> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
>> [email protected]> wrote:
>>
>>> I checked in the logs when the autovacuum vacuum my big toasted table
>>> during the week and I wanted to confirm with you what I think :
>>> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic
>>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
>>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
>>> 19274530 dirtied
>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
>>> MiB/s
>>> --
>>> postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic vacuum
>>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 23
>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain
>>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
>>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,
>>> 38950869 dirtied
>>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396
>>> MiB/s
>>> --
>>> postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic
>>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
>>> postgresql-Mon.log- pages: 0 removed, 23176876 remain
>>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain
>>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,
>>> 15626466 dirtied
>>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470
>>> MiB/s
>>> --
>>> postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic vacuum
>>> of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
>>> postgresql-Sat.log- pages: 0 removed, 13737828 remain
>>> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain
>>> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,
>>> 22473776 dirtied
>>> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410
>>> MiB/s
>>> --
>>> postgresql-Thu.log:2019-02-07 12:08:50 EST  29630  LOG:  automatic
>>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
>>> postgresql-Thu.log- pages: 0 removed, 10290976 remain
>>> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain
>>> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,
>>> 19232835 dirtied
>>> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437
>>> MiB/s
>>> --
>>> postgresql-Tue.log:2019-02-12 20:54:44 EST  21464  LOG:  automatic
>>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 10
>>> postgresql-Tue.log- pages: 0 removed, 23176876 remain
>>> postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain
>>> postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,
>>> 25472137 dirtied
>>> postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385
>>> MiB/s
>>> --
>>>
>>>
>>> Lets focus for example on one of the outputs :
>>> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic
>>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
>>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
>>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
>>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
>>> 19274530
>>> dirtied
>>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
>>> MiB/s
>>>
>>> The cost_limit is set to 200 (default) and the cost_delay is set to
>>> 20ms.
>>> The calculation I did:
>>> (1*15031267 + 10*21081633 + 20*19274530) / 200 * 20 / 1000
>>> = 61133.8197 seconds ~ 17H
>>> So autovacuum was laying down for 17h ? I think that I should increase
>>> the cost_limit to max specifically on the toasted table. What do you think
>>> ? Am I wrong here ?
>>>
>>>
>>> ‫בתאריך יום ה׳, 7 בפבר׳ 2019 ב-18:26 מאת ‪Jeff Janes‬‏ <‪
>>> jef

Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Mariel Cherkassky
No, I don't run vacuum manually afterwards, because autovacuum should
run. This process happens every night. Yes, bloating is an issue because
the table grows and takes a lot of space on disk. Regarding the autovacuum,
I think that it sleeps too much (17h) during its work, don't you
think so?

On Thu, Feb 14, 2019, 9:52 PM Michael Lewis wrote:
> Thanks, that context is very enlightening. Do you manually vacuum after
> doing the big purge of old session data? Is bloat causing issues for you?
> Why is it a concern that autovacuum's behavior varies?
>
>
> *Michael Lewis*
>
> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
> [email protected]> wrote:
>
>> Maybe by explaining the tables purpose it will be cleaner. The original
>> table contains rows for sessions in my app. Every session saves for itself
>> some raw data which is saved in the toasted table. We clean old sessions
>> (3+ days) every night. During the day sessions are created so the size of
>> the table should grow during the day and freed in the night after the
>> autovacuum run.However, the autovacuums sleeps for alot of time and during
>> that time more sessions are created so maybe this can explain the big size
>> ? Do you think that by increasing the cost limit and decreasing the cost
>> delay I can solve the issue ?
>>
>> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis >
>>> It is curious to me that the tuples remaining count varies so wildly. Is
>>> this expected?
>>>
>>>
>>> *Michael Lewis*
>>>
>>> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
>>> [email protected]> wrote:
>>>
 I checked in the logs when the autovacuum vacuum my big toasted table
 during the week and I wanted to confirm with you what I think :
 postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
 postgresql-Fri.log- pages: 2253 removed, 13737828 remain
 postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
 postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
 19274530 dirtied
 postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
 MiB/s
 --
 postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 23
 postgresql-Mon.log- pages: 0 removed, 23176876 remain
 postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
 postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,
 38950869 dirtied
 postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396
 MiB/s
 --
 postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
 postgresql-Mon.log- pages: 0 removed, 23176876 remain
 postgresql-Mon.log- tuples: 114573 removed, 57785 remain
 postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,
 15626466 dirtied
 postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470
 MiB/s
 --
 postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
 postgresql-Sat.log- pages: 0 removed, 13737828 remain
 postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain
 postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,
 22473776 dirtied
 postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410
 MiB/s
 --
 postgresql-Thu.log:2019-02-07 12:08:50 EST  29630  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
 postgresql-Thu.log- pages: 0 removed, 10290976 remain
 postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain
 postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,
 19232835 dirtied
 postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg write rate: 2.437
 MiB/s
 --
 postgresql-Tue.log:2019-02-12 20:54:44 EST  21464  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 10
 postgresql-Tue.log- pages: 0 removed, 23176876 remain
 postgresql-Tue.log- tuples: 26011446 removed, 49426774 remain
 postgresql-Tue.log- buffer usage: 21863057 hits, 28668178 misses,
 25472137 dirtied
 postgresql-Tue.log- avg read rate: 2.684 MiB/s, avg write rate: 2.385
 MiB/s
 --


 Lets focus for example on one of the outputs :
 postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic
 vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
 postgresql-Fri.log- pages: 2253 removed, 13737828 remain
 postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
 postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
 19274530
 dirtied
 postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
 MiB/s


Re: Q on SQL Performance tuning

2019-02-14 Thread Greg Stark
On Sun, 27 Jan 2019 at 06:29, legrand legrand
 wrote:
>
> Hi,
>
> There are many tools:
> - (core) extension pg_stat_statements will give you informations of SQL
> executions,

I've had enormous success using pg_stat_statements and gathering the
data over time in Prometheus. That let me build a dashboard in Grafana
that can dive into specific queries and see when their executions rate
suddenly spiked or the resource usage for the query suddenly changed.
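
For example, a minimal query of the kind that can be scraped periodically (a
sketch assuming the pg_stat_statements extension is installed; column names as
of PostgreSQL 11):

select queryid, calls, total_time, rows, shared_blks_hit, shared_blks_read
from pg_stat_statements
order by total_time desc
limit 20;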

> - extension pg_stat_sql_plans (alpha) gives all of pg_stat_statements and
> much more

Extending pg_stat_statements to track statistics per-plan would be a
huge advance. And being able to link the metrics with data dumped in
the log from things like log_min_duration and pg_auto_explain would
make them both more useful.
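
For reference, a sketch of session-level settings that produce that kind of log
output (auto_explain is the contrib module; the values here are only illustrative):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s';   -- log plans of statements slower than 1 s
SET auto_explain.log_analyze = on;          -- include actual row counts and timings
SET log_min_duration_statement = '1s';      -- log the statement text and duration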

-- 
greg



Re: JIT overhead slowdown

2019-02-14 Thread Justin Pryzby
Hi,

On Fri, Jan 18, 2019 at 02:12:23PM +, Luis Carril wrote:
> we noticed that in the presence of a schema with many partitions the 
> jitting overhead penalizes the total query execution time so much that the 
> planner should have decided not to jit at all. For example without jitting we 
> go a 8.3 s execution time and with jitting enabled 13.8 s.
...
> Is this behavior expected? Is the cost function for jitting missing some 
> circumstances?

On Fri, Jan 18, 2019 at 08:42:54AM -0800, Andres Freund wrote:
> The costing doesn't take the effect of overhead of repeated JITing in
> each worker into account.  I could give you a test patch that does, if
> you want to play around with it?

On Fri, Jan 18, 2019 at 06:02:43PM +, Luis Carril wrote:
>  yes please it would be much apreciated.

I'm also interested to try that ; on re-enabling JIT in 11.2, I see that JITed
queries seem to be universally slower than non-JIT.

I found that was discussed here:
https://www.postgresql.org/message-id/20180822161241.je6nghzjsktbb57b%40alap3.anarazel.de
https://www.postgresql.org/message-id/20180624203633.uxirvmigzdhcyjsd%40alap3.anarazel.de

Multiplying JIT cost by nworkers seems like an obvious thing to try, but I
wondered whether it's really correct?  Certainly repeated JITing takes N times
more CPU time, but doesn't make the query slower...unless the CPU resources are
starved and limiting ?
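
One way to see the effect on a given query is to toggle JIT per session and compare
plans; a sketch (these GUCs exist in v11, and <query> stands in for the slow query):

SET jit = off;
EXPLAIN (ANALYZE) <query>;
SET jit = on;
EXPLAIN (ANALYZE) <query>;
-- or keep JIT enabled but raise its cost thresholds so it only fires
-- on genuinely expensive plans:
SET jit_above_cost = 1000000;
SET jit_inline_above_cost = 5000000;
SET jit_optimize_above_cost = 5000000;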

Justin



Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Michael Lewis
If there are high number of updates during normal daytime processes, then
yes you need to ensure autovacuum is handling this table as needed. If the
nightly delete is the only major source of bloat on this table, then
perhaps running a manual vacuum keeps things tidy after the big delete.
Granted, if you are going to vacuum manually, don't use VACUUM FULL, as
there is not much sense in recovering that disk space if the table is
expected to be similarly sized again by the end of the day.

Do you have a proper number of workers and maintenance_work_mem to get the
job done?

As you proposed, it seems likely to be good to significantly increase
autovacuum_vacuum_cost_limit on this table, and perhaps decrease
autovacuum_vacuum_scale_factor if it is not being picked up as a candidate
for vacuum very frequently.
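
A sketch of what that could look like as per-table storage parameters (the table
name is hypothetical; the toast.* prefix targets the table's TOAST relation):

ALTER TABLE app_sessions SET (
    toast.autovacuum_vacuum_cost_limit = 2000,    -- allow more work per cost cycle
    toast.autovacuum_vacuum_cost_delay = 10,      -- sleep less between cycles (ms)
    toast.autovacuum_vacuum_scale_factor = 0.01   -- become a vacuum candidate sooner
);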



Michael Lewis


On Thu, Feb 14, 2019 at 1:08 PM Mariel Cherkassky <
[email protected]> wrote:

> No I don't run vacuum manually afterwards because the autovacuum should
> run. This process happens every night. Yes , bloating is an issue because
> the table grow and take a lot of space on disk.  Regarding the autovacuum,
> I think that it sleeps too much time (17h) during it's work, don't you
> think so?
>
> On Thu, Feb 14, 2019, 9:52 PM Michael Lewis 
>> Thanks, that context is very enlightening. Do you manually vacuum after
>> doing the big purge of old session data? Is bloat causing issues for you?
>> Why is it a concern that autovacuum's behavior varies?
>>
>>
>> *Michael Lewis*
>>
>> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
>> [email protected]> wrote:
>>
>>> Maybe by explaining the tables purpose it will be cleaner. The original
>>> table contains rows for sessions in my app. Every session saves for itself
>>> some raw data which is saved in the toasted table. We clean old sessions
>>> (3+ days) every night. During the day sessions are created so the size of
>>> the table should grow during the day and freed in the night after the
>>> autovacuum run.However, the autovacuums sleeps for alot of time and during
>>> that time more sessions are created so maybe this can explain the big size
>>> ? Do you think that by increasing the cost limit and decreasing the cost
>>> delay I can solve the issue ?
>>>
>>> On Thu, Feb 14, 2019, 8:38 PM Michael Lewis >>
 It is curious to me that the tuples remaining count varies so wildly.
 Is this expected?


 *Michael Lewis*

 On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
 [email protected]> wrote:

> I checked in the logs when the autovacuum vacuum my big toasted table
> during the week and I wanted to confirm with you what I think :
> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic
> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
> 19274530 dirtied
> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469
> MiB/s
> --
> postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic
> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 23
> postgresql-Mon.log- pages: 0 removed, 23176876 remain
> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,
> 38950869 dirtied
> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate: 2.396
> MiB/s
> --
> postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic
> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
> postgresql-Mon.log- pages: 0 removed, 23176876 remain
> postgresql-Mon.log- tuples: 114573 removed, 57785 remain
> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,
> 15626466 dirtied
> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate: 2.470
> MiB/s
> --
> postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic
> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
> postgresql-Sat.log- pages: 0 removed, 13737828 remain
> postgresql-Sat.log- tuples: 34457593 removed, 15871942 remain
> postgresql-Sat.log- buffer usage: 15552642 hits, 26130334 misses,
> 22473776 dirtied
> postgresql-Sat.log- avg read rate: 2.802 MiB/s, avg write rate: 2.410
> MiB/s
> --
> postgresql-Thu.log:2019-02-07 12:08:50 EST  29630  LOG:  automatic
> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
> postgresql-Thu.log- pages: 0 removed, 10290976 remain
> postgresql-Thu.log- tuples: 35357057 removed, 3436237 remain
> postgresql-Thu.log- buffer usage: 11854053 hits, 21346342 misses,
> 19232835 dirtied
> postgresql-Thu.log- avg read rate: 2.705 MiB/s, avg

Re: autovacuum big table taking hours and sometimes seconds

2019-02-14 Thread Mariel Cherkassky
I set toast.autovacuum_vacuum_scale_factor to 0 and
toast.autovacuum_vacuum_threshold to 1, so it should be enough to force
a vacuum after the nightly deletes. Now I have changed the cost limit and the
cost delay; my question is whether I have anything else to do. My
maintenance_work_mem is about 1 GB and I didn't change the default number of
workers. Is there a way to calculate what size maintenance_work_mem
should be in order to clean the table? And what exactly is saved in
maintenance_work_mem? I mean, how is it used by autovacuum?
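
(A rough sketch of the sizing arithmetic, assuming vacuum stores one 6-byte
pointer per dead tuple in maintenance_work_mem, or autovacuum_work_mem if set:

1 GB / 6 bytes per dead-tuple pointer ~ 178,000,000 dead tuples per pass;
each time that array fills up, vacuum scans all of the table's indexes once
before it can continue, which is what the "index scans: N" log line counts.)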

On Thu, Feb 14, 2019, 11:45 PM Michael Lewis  If there are high number of updates during normal daytime processes, then
> yes you need to ensure autovacuum is handling this table as needed. If the
> nightly delete is the only major source of bloat on this table, then
> perhaps running a manual vacuum keeps things tidy after the big delete.
> Granted, if you are manually going to vacuum, don't use vacuum full as
> there is not much sense in recovering that disk space if the table is going
> to expected to be similarly sized again by the end of the day.
>
> Do you have a proper number of workers and maintenance_work_mem to get the
> job done?
>
> As you proposed, it seems likely to be good to significantly increase
> autovacuum_vacuum_cost_limit on this table, and perhaps decrease
> autovacuum_vacuum_scale_factor if it is not being picked up as a candidate
> for vacuum very frequently.
>
>
>
> *Michael Lewis *
>
>
> On Thu, Feb 14, 2019 at 1:08 PM Mariel Cherkassky <
> [email protected]> wrote:
>
>> No I don't run vacuum manually afterwards because the autovacuum should
>> run. This process happens every night. Yes , bloating is an issue because
>> the table grow and take a lot of space on disk.  Regarding the autovacuum,
>> I think that it sleeps too much time (17h) during it's work, don't you
>> think so?
>>
>> On Thu, Feb 14, 2019, 9:52 PM Michael Lewis >
>>> Thanks, that context is very enlightening. Do you manually vacuum after
>>> doing the big purge of old session data? Is bloat causing issues for you?
>>> Why is it a concern that autovacuum's behavior varies?
>>>
>>>
>>> *Michael Lewis*
>>>
>>> On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
>>> [email protected]> wrote:
>>>
 Maybe by explaining the tables purpose it will be cleaner. The original
 table contains rows for sessions in my app. Every session saves for itself
 some raw data which is saved in the toasted table. We clean old sessions
 (3+ days) every night. During the day sessions are created so the size of
 the table should grow during the day and freed in the night after the
 autovacuum run.However, the autovacuums sleeps for alot of time and during
 that time more sessions are created so maybe this can explain the big size
 ? Do you think that by increasing the cost limit and decreasing the cost
 delay I can solve the issue ?

 On Thu, Feb 14, 2019, 8:38 PM Michael Lewis >>>
> It is curious to me that the tuples remaining count varies so wildly.
> Is this expected?
>
>
> *Michael Lewis*
>
> On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
> [email protected]> wrote:
>
>> I checked in the logs when the autovacuum vacuum my big toasted table
>> during the week and I wanted to confirm with you what I think :
>> postgresql-Fri.log:2019-02-08 05:05:53 EST  24776  LOG:  automatic
>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
>> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
>> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
>> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses,
>> 19274530 dirtied
>> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate:
>> 2.469 MiB/s
>> --
>> postgresql-Mon.log:2019-02-11 01:11:46 EST  8426  LOG:  automatic
>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 23
>> postgresql-Mon.log- pages: 0 removed, 23176876 remain
>> postgresql-Mon.log- tuples: 62269200 removed, 82958 remain
>> postgresql-Mon.log- buffer usage: 28290538 hits, 46323736 misses,
>> 38950869 dirtied
>> postgresql-Mon.log- avg read rate: 2.850 MiB/s, avg write rate:
>> 2.396 MiB/s
>> --
>> postgresql-Mon.log:2019-02-11 21:43:19 EST  24323  LOG:  automatic
>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 1
>> postgresql-Mon.log- pages: 0 removed, 23176876 remain
>> postgresql-Mon.log- tuples: 114573 removed, 57785 remain
>> postgresql-Mon.log- buffer usage: 15877931 hits, 15972119 misses,
>> 15626466 dirtied
>> postgresql-Mon.log- avg read rate: 2.525 MiB/s, avg write rate:
>> 2.470 MiB/s
>> --
>> postgresql-Sat.log:2019-02-09 04:54:50 EST  1793  LOG:  automatic
>> vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 13
>> postgresql-Sat.log- pages: 0