Partitioning Optimizer Questions and Issues

2019-02-08 Thread keith anderson
I am using Postgres on a large system (recording approximately 20 million
transactions per day). We use partitioning by date to assist with both vacuum 
processing time and to archive old data. At the core of the system are records 
in 2 different tables detailing different types of activity for monetary 
transactions (e.g. money in and money out) -> a single transaction has entries 
in both tables, so to retrieve all details for a single transaction we need to 
join the 2 tables.

The use of partitioning however has a significant impact on the performance of 
retrieving this data. Being relatively new to Postgres I wanted to share my 
findings and understand how others address them. We run postgres version 9.6 on 
CentOS, but the same behaviour is apparent in postgres 10.6. The test case 
outputs are from version 10.6 running on my Ubuntu machine with default 
postgres configuration.
Below is an example script to populate the test data:
===
drop table if exists tablea cascade;
drop table if exists tableb cascade;

CREATE TABLE tablea (
    id            serial,
    reference     int not null,
    created       date not null
) PARTITION BY RANGE (created);

CREATE TABLE tablea_part1 PARTITION OF tablea FOR VALUES FROM ('2018-01-01') TO ('2018-01-02');
CREATE TABLE tablea_part2 PARTITION OF tablea FOR VALUES FROM ('2018-01-02') TO ('2018-01-03');
CREATE TABLE tablea_part3 PARTITION OF tablea FOR VALUES FROM ('2018-01-03') TO ('2018-01-04');
CREATE TABLE tablea_part4 PARTITION OF tablea FOR VALUES FROM ('2018-01-04') TO ('2018-01-05');
CREATE TABLE tablea_part5 PARTITION OF tablea FOR VALUES FROM ('2018-01-05') TO ('2018-01-06');
CREATE INDEX tablea_id_1 ON tablea_part1 (id);
CREATE INDEX tablea_id_2 ON tablea_part2 (id);
CREATE INDEX tablea_id_3 ON tablea_part3 (id);
CREATE INDEX tablea_id_4 ON tablea_part4 (id);
CREATE INDEX tablea_id_5 ON tablea_part5 (id);
CREATE INDEX tablea_reference_1 ON tablea_part1 (reference);
CREATE INDEX tablea_reference_2 ON tablea_part2 (reference);
CREATE INDEX tablea_reference_3 ON tablea_part3 (reference);
CREATE INDEX tablea_reference_4 ON tablea_part4 (reference);
CREATE INDEX tablea_reference_5 ON tablea_part5 (reference);
CREATE INDEX tablea_created_1 ON tablea_part1 (created);
CREATE INDEX tablea_created_2 ON tablea_part2 (created);
CREATE INDEX tablea_created_3 ON tablea_part3 (created);
CREATE INDEX tablea_created_4 ON tablea_part4 (created);
CREATE INDEX tablea_created_5 ON tablea_part5 (created);
alter table tablea_part1 add CHECK (created >= DATE '2018-01-01' AND created < DATE '2018-01-02');
alter table tablea_part2 add CHECK (created >= DATE '2018-01-02' AND created < DATE '2018-01-03');
alter table tablea_part3 add CHECK (created >= DATE '2018-01-03' AND created < DATE '2018-01-04');
alter table tablea_part4 add CHECK (created >= DATE '2018-01-04' AND created < DATE '2018-01-05');
alter table tablea_part5 add CHECK (created >= DATE '2018-01-05' AND created < DATE '2018-01-06');
create or replace function populate_tablea() RETURNS integer AS
$BODY$
DECLARE
    i integer;
    v_created date;
BEGIN
    i := 0;
    WHILE (i < 5) LOOP
        i := i + 1;
        IF (mod(i,5) = 1) THEN
            v_created = '2018-01-01';
        ELSIF (mod(i,5) = 2) THEN
            v_created = '2018-01-02';
        ELSIF (mod(i,5) = 3) THEN
            v_created = '2018-01-03';
        ELSIF (mod(i,5) = 4) THEN
            v_created = '2018-01-04';
        ELSIF (mod(i,5) = 0) THEN
            v_created = '2018-01-05';
        END IF;
        insert into tablea values (i, i, v_created);
    END LOOP;
    RETURN i;
END;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;
CREATE TABLE tableb (
    id         serial,
    reference  int not null,
    created    date not null
) PARTITION BY RANGE (created);

CREATE TABLE tableb_part1 PARTITION OF tableb FOR VALUES FROM ('2018-01-01') TO ('2018-01-02');
CREATE TABLE tableb_part2 PARTITION OF tableb FOR VALUES FROM ('2018-01-02') TO ('2018-01-03');
CREATE TABLE tableb_part3 PARTITION OF tableb FOR VALUES FROM ('2018-01-03') TO ('2018-01-04');
CREATE TABLE tableb_part4 PARTITION OF tableb FOR VALUES FROM ('2018-01-04') TO ('2018-01-05');
CREATE TABLE tableb_part5 PARTITION OF tableb FOR VALUES FROM ('2018-01-05') TO ('2018-01-06');

CREATE INDEX tableb_id_1 ON tableb_part1 (id);
CREATE INDEX tableb_id_2 ON tableb_part2 (id);
CREATE INDEX tableb_id_3 ON tableb_part3 (id);
CREATE INDEX tableb_id_4 ON tableb_part4 (id);
CREATE INDEX tableb_id_5 ON tableb_part5 (id);
CREATE INDEX tableb_reference_1 ON tableb_part1 (reference);
CREATE INDEX
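The quoted script breaks off above. To load the sample rows and refresh planner statistics before experimenting, one might run the following (a sketch; it assumes the remaining tableb index definitions are completed analogously to tablea):

```sql
-- Insert the 5 sample rows (one per partition) and update statistics;
-- without the ANALYZE, pg_class.reltuples and pg_stats have nothing to offer
SELECT populate_tablea();
ANALYZE tablea;
ANALYZE tableb;
```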

Re: Partitioning Optimizer Questions and Issues

2019-02-11 Thread keith anderson
Thanks for the feedback, Justin.

You are right: the most-common-values list is empty for my test case, so it is 
using n_distinct for the 'id IN ()' scenario. And I can see, from the 
pg_class.reltuples and pg_stats.histogram_bounds values, how the optimizer 
concludes that only 1 in 5 of the entries in my range query fall in each 
individual partition.
However, I can also see that the pg_stats.n_distinct value for tablea shows -1, 
as do all the individual child partitions. In my opinion, when using n_distinct 
on partitioned tables, it makes sense for the optimizer to use the n_distinct 
value of the parent table when estimating row counts, rather than a sum of the 
partition-level statistics. Or can someone come up with a good reason to 
avoid this? 
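The per-table n_distinct values being compared here can be inspected directly. A sketch against the test schema above (note that on these versions the parent's row only appears in pg_stats after ANALYZE has been run on the parent table itself):

```sql
-- n_distinct = -1 means the column is estimated to be all-distinct
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE attname = 'id'
  AND tablename LIKE 'tablea%'
ORDER BY tablename;
```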
A couple of examples of different data in a partitioned table:
- Unique identifier -> if providing a single value in a query, using n_distinct 
from the parent will estimate 1 row, while using the child tables will estimate 
1 * (number of partitions). Use of the parent table would be correct.
- Date of activity, with 1000 records per day -> if providing a single day to 
the query, using n_distinct from the parent would estimate 1000 rows returned, 
while using the child tables will estimate 1000 * (number of partitions). Use 
of the parent table n_distinct is correct.
Perhaps when querying on columns that are part of the partition logic you could 
use the partition level stats, but I think the vast majority of the time, using 
the parent statistics would be much more reliable/accurate than summing across 
partitions.
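The summed-across-partitions effect is easy to observe with the 5-row, 5-partition test data above; a sketch (exact estimates will vary with statistics):

```sql
-- With per-partition n_distinct, the row estimates from each partition's
-- scan are summed, exceeding the 3 rows actually returned
EXPLAIN SELECT * FROM tablea WHERE id IN (1, 2, 3);
```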
In terms of the partition strategy, I agree that it should be chosen with a 
view to improving performance. I will look in more detail at your suggestions, 
but in general partitioning is very hard to use effectively, as there are 
competing priorities:
- I would like to not have to manage massive numbers of partitions.
- I would like to be able to archive data easily using date (a big plus point 
of the existing date partitioning strategy).
- It is hard in most cases to come up with a partition strategy that allows 
for partition elimination, e.g. consider a common 'transaction record' table 
with a primary key, an account identifier, and a date -> it is natural to want 
to be able to query on any one of these, but as things stand it cannot be 
achieved performantly with partitioning.
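To illustrate with the date-partitioned test schema above, only the query that includes the partition key can be pruned (a sketch; plans depend on configuration):

```sql
-- Prunable: the planner can prove only tablea_part2 can match
EXPLAIN SELECT * FROM tablea
WHERE reference = 2 AND created = DATE '2018-01-02';

-- Not prunable: the reference index on every partition must be probed
EXPLAIN SELECT * FROM tablea WHERE reference = 2;
```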
Global index support feels like it has the potential to resolve many of the 
issues I have with partitioning (beyond the optimizer concern above). I assume, 
though, that this has been discussed and rejected by the community?
I've attached as a file the original test script.
Keith


On Friday, 8 February 2019, 13:05:04 GMT, Justin Pryzby 
 wrote:  
 
 On Fri, Feb 08, 2019 at 11:13:51AM +, keith anderson wrote:
> So to summarise the findings/questions from above:
> - It seems like the Postgres optimizer sometimes uses the partition-level 
> statistics, and sometimes the global table-level statistics? Or is it using 
> something else?
> - With partitioned tables with unique identifiers, when retrieving 
> explicitly on those identifiers, at present the optimizer will always 
> underestimate the selectivity and overestimate the rows returned. This 
> inaccuracy increases in proportion to the number of partitions.
> - As a result, when joining to other tables, you are liable to hit 
> sequential scans. This becomes more likely as you have more partitions or if 
> you join to more partitioned tables (note I am aware I could try to tune 
> random_page_cost to try to prevent this).
> - To me, in the example queries described above, it makes sense to use the 
> partition statistics for the partition-level access strategy, but the global 
> statistics when estimating the actual rows returned by all the individual 
> partition queries. Is there a reason not to do this? Or do others believe 
> the optimizer is doing the right thing here?
> And then some general questions:
> - How do other people use partitioning but without a significant performance 
> disadvantage on reading the data? Is there something else I should be doing 
> here to achieve the same thing without the overhead? At present my reads 
> have increased optimization cost (as it needs to optimize access to each 
> partition) and also execution cost (access the index on every partition). 
> Even without the optimizer issues described above, the cost of reading 
> simple data is extremely high relative to non-partitioned data (unless you 
> use the partition key as a filter for each table to eliminate those 
> partitions).
> - Is there any chance/plan to add global indexes to postgres? If so, would 
> that impact significantly the cost of the partition drop, e.g. to clean up 
> the index?
> Thanks in advance for any feedback/support,

An equality or IN() query will use the pg_stats most-common-values list,
whereas a range query will us