Re: Strange runtime partition pruning behaviour with 11.4

2019-08-13 Thread Sverre Boschman
On Tue, Aug 13, 2019 at 8:46 AM Thomas Kellerer  wrote:

> I stumbled across this question on SO:
> https://stackoverflow.com/questions/56517852
>
> Disregarding the part about Postgres 9.3, the example for Postgres 11
> looks a bit confusing.
>
> There is a script to setup test data in that question:
>
>  start of script 
>
> create table foo (
> foo_id integer not null,
> foo_name varchar(10),
> constraint foo_pkey primary key (foo_id)
> );
>
> insert into foo
>   (foo_id, foo_name)
> values
>   (1, 'eeny'),
>   (2, 'meeny'),
>   (3, 'miny'),
>   (4, 'moe'),
>   (5, 'tiger'),
>   (6, 'toe');
>
> create table foo_bar_baz (
> foo_id integer not null,
> bar_id integer not null,
> baz integer not null,
> constraint foo_bar_baz_pkey primary key (foo_id, bar_id, baz),
> constraint foo_bar_baz_fkey1 foreign key (foo_id)
> references foo (foo_id)
> ) partition by range (foo_id)
> ;
>
> create table if not exists foo_bar_baz_0 partition of foo_bar_baz for
> values from (0) to (1);
> create table if not exists foo_bar_baz_1 partition of foo_bar_baz for
> values from (1) to (2);
> create table if not exists foo_bar_baz_2 partition of foo_bar_baz for
> values from (2) to (3);
> create table if not exists foo_bar_baz_3 partition of foo_bar_baz for
> values from (3) to (4);
> create table if not exists foo_bar_baz_4 partition of foo_bar_baz for
> values from (4) to (5);
> create table if not exists foo_bar_baz_5 partition of foo_bar_baz for
> values from (5) to (6);
>
> with foos_and_bars as (
> select ((random() * 4) + 1)::int as foo_id, bar_id::int
> from generate_series(0, 1499) as t(bar_id)
> ), bazzes as (
> select baz::int
> from generate_series(1, 1500) as t(baz)
> )
> insert into foo_bar_baz (foo_id, bar_id, baz)
> select foo_id, bar_id, baz
> from bazzes as bz
>   join foos_and_bars as fab on mod(bz.baz, fab.foo_id) = 0;
>
>  end of script 
>
> I see some strange behaviour similar to what is reported in the
> comments to that question:
>
> When I run the test query immediately after populating the tables with the
> sample data:
>
> explain analyze
> select count(*)
> from foo_bar_baz as fbb
>   join foo on fbb.foo_id = foo.foo_id
> where foo.foo_name = 'eeny'
>
> I do see an "Index Only Scan  (never executed)" in the plan for the
> irrelevant partitions:
>
>   https://explain.depesz.com/s/AqlE
>
> However once I run "analyze foo_bar_baz" (or "vacuum analyze"), Postgres
> chooses to do a "Parallel Seq Scan" for each partition:
>
>   https://explain.depesz.com/s/WwxE
>
> Why does updating the statistics mess up (runtime) partition pruning?
>
>
> I played around with random_page_cost and that didn't change anything.
> I tried to create extended statistics on "foo(id, name)" so that the
> planner would know that there is only one name per id. No change.
>
> I saw the above behaviour when running this on Windows 10 (my Laptop) or
> CentOS 7 (a test environment on a VM)
>
> On the CentOS server default_statistics_target is set to 100, on my laptop
> it is set to 1000
>
> In both cases the Postgres version was 11.4
>
> Any ideas?
>
> Thomas
>
>
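(For reference, the extended-statistics attempt Thomas describes would look roughly
like the sketch below, using the foo_id/foo_name columns from the script above; the
statistics object name is made up, and a fresh ANALYZE is needed before the new
statistics are collected:)

    create statistics foo_id_name_dep (dependencies) on foo_id, foo_name from foo;
    analyze foo;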
Ran into the same behaviour of the planner. The number of rows in the
partitions influences the statistics being generated, and the statistics in
turn influence the plan chosen.

I managed to force the "correct" plan by manually setting the n_distinct
statistics for the partitioned table, e.g.:

    alter table foo_bar_baz
      alter column foo_id set (n_distinct = -1, n_distinct_inherited = -1);

With a certain number of rows in the partitions, the analyzer sets the
n_distinct value for the partitioned table to the number of unique partition
keys, and the n_distinct value for each individual partition to the number of
unique partition keys in that partition. Unfortunately this causes the planner
to pick a plan that doesn't allow for runtime partition pruning, resulting in
very slow execution times.
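(A minimal sketch of the workaround, repeating the override from above: the
override only becomes visible to the planner after the next ANALYZE, and
pg_stats can be used to check what the planner will see.)

    alter table foo_bar_baz
      alter column foo_id set (n_distinct = -1, n_distinct_inherited = -1);
    analyze foo_bar_baz;

    select tablename, attname, inherited, n_distinct
      from pg_stats
     where tablename like 'foo_bar_baz%'
       and attname = 'foo_id';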

Regards,
Sverre


Re: zabbix on postgresql - very slow delete of events

2019-08-13 Thread Kristian Ejvind
Hi.

Just a short message confirming that after we altered the tables to have
matching types, deletes now take 1 ms instead of 5 sec. Indexes are being used
now.

Thanks for assistance.

Regards
Kristian

PS. It would be nice to have some warning or indication in the analyze output
when this happens.





Kristian Ejvind
Linux System Administrator
IT Operations | Technical Operations




Resurs Bank
Ekslingan 8
Box 222 09, SE-25467 Helsingborg




Mobile:  +46 728571483
Switchboard:  +46 42 38 20 00
Email: [email protected]
Web:   www.resursbank.se



From: Maxim Boguk 
Date: Wednesday, 24 July 2019 at 19:17
To: Kristian Ejvind 
Cc: "[email protected]" 

Subject: Re: zabbix on postgresql - very slow delete of events



On Wed, Jul 24, 2019 at 6:12 PM Kristian Ejvind <[email protected]> wrote:
Hi Maxim

Thanks for your advice, and let me start with your second email, which I'll 
copy here:

=
Hi Kristian,

After comparing the structure of the zabbix tables with the same tables in my
zabbix installation, I found one very weird difference.
Why has the type of events.eventid been changed from the default bigint to numeric?

I suspect that the difference between the events.eventid type (numeric) and the
event_recovery.*_eventid types (bigint) might prevent the use of an index during
foreign key checks.
Anyway, it will be clearly visible in the pg_stat_xact_user_tables results (I now
expect to see 3 sequential scans on event_recovery, and maybe on some other tables
as well).

Kind Regards,
Maxim
=

Well spotted! On closer examination it seems that data types are wrong in 
several places. I suspect that this comes
from the time when our Zabbix ran on a MySQL database, which was converted over 
to PostgreSQL a few years
ago. I agree this discrepancy is suspicious and I will continue to examine it.

Regarding your ideas in the email below, I can say that 1) is not valid; disk
latency is in the range of a few ms.
This is the output from your recommended query, which seems to verify your
suspicions.

zabbix_34=# begin;
            delete from zabbix.events where eventid = 7123123;
            select * from pg_stat_xact_user_tables
             where seq_scan > 0 or idx_scan > 0
             order by seq_scan + idx_scan desc;
            rollback;
Time: 0.113 ms
Time: 4798.189 ms (00:04.798)

  relid | schemaname |    relname     | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd
--------+------------+----------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------
  41940 | zabbix     | event_recovery |        3 |     35495224 |        0 |             0 |         0 |         0 |         1 |             0
  41675 | zabbix     | alerts         |        1 |       544966 |        1 |             0 |         0 |         0 |         0 |             0
  42573 | zabbix     | problem        |        2 |        13896 |        0 |             0 |         0 |         0 |         0 |             0
  41943 | zabbix     | event_tag      |        1 |        22004 |        0 |             0 |         0 |         0 |         0 |             0
  41649 | zabbix     | acknowledges   |        1 |           47 |        0 |             0 |         0 |         0 |         0 |             0
  41951 | zabbix     | events         |        0 |            0 |        1 |             1 |         0 |         0 |         1 |             0
 260215 | zabbix     | event_suppress |        1 |            0 |        0 |             0 |         0 |         0 |         0 |             0

Hi Kristian,

This result definitely proves that indexes are not used during the foreign key
checks (see the non-zero seq_scan counters for the linked tables).
The only possible reason (IMHO) is the incorrect use of numeric in place of bigint.
I recommend changing the type of events.eventid (and any other similar fields) to
bigint.
It should resolve your performance issues with deletes on the events table (as an
additional bonus, bigint is a much faster and more compact type than numeric).
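(A minimal sketch of that change, assuming events.eventid is the only column that
still uses numeric; any other numeric id column found the same way would need the
same treatment, and the rewrite takes an exclusive lock on the table:)

    alter table zabbix.events
      alter column eventid type bigint;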

--
Maxim Boguk
Senior Postgresql DBA
https://dataegret.com/

Phone RU: +7  985 433 
Phone UA: +380 99 143 
Phone AU: +61  45 218 5678

LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk

"??, ?? ??? ?? ??? ?? ??, ?? ?? ??? ??- ?? 
? ? ??? ? ??? "


Re: Last event per user

2019-08-13 Thread Luís Roberto Weck
On Mon, Aug 12, 2019 at 5:03 PM Luís Roberto Weck <[email protected]> wrote:



If you modify last_user_event_2 to select user and event info in
the view, and just put the where clause directly on the view
which is not joined to anything, instead of on the "extra copy"
of the users table like you were showing previously, I would
expect that the performance should be excellent.


But I need user_id and user_group to be outside of the view
definition. user_id and user_group are dynamic values, as in, I
need to call this query multiple times for different user_ids and
user_groups.


I don't follow. Perhaps there is something within the limitations of 
the ORM layer that I am not expecting. If you have this view-


"last_user_event_2"

SELECT u.*, e.*

   FROM users u

        JOIN LATERAL (SELECT *

FROM events

 WHERE user_id = u.user_id

 AND user_group = u.user_group

 ORDER BY timestamp_inc DESC

 LIMIT 1 ) e ON TRUE


And you execute a query like this-

SELECT * FROM last_user_event_2 e WHERE user_id = 1272897 AND user_group = 19117;


Then I would expect very good performance.


You're right, thanks! I just had to make a small adjustment to the lateral
join. Since both users and events have user_id and user_group, PostgreSQL
complains that the view can't have more than one column with the same name. I
fixed it by changing the LATERAL join condition from "ON TRUE" to
"USING (user_id, user_group)" (which I didn't even know I could do).


ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Daulat Ram
Hi team,
I am getting the below error while fetching the data from Oracle 12c using 
ora2pg.

DBD::Oracle::st fetchall_arrayref failed: ORA-24345: A Truncation or null fetch 
error occurred (DBD SUCCESS_WITH_INFO: OCIStmtFetch, LongReadLen too small 
and/or LongTruncOk not set)ERROR no statement executing (perhaps you need to 
call execute first) [for Statement "SELECT "USERS_ID","NAME","USERS" FROM 
"GBOPSUI"."USER_GROUP_USERS_V5" a"] at /usr/local/share/perl5/Ora2Pg.pm line 
14110.


Initially I did not have LongReadLen set, so I thought this was the cause. But I
have now set LongReadLen on the db handle to 9000.

Thanks,
Daulat



Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Luca Ferrari
On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram  wrote:
> Initially I did not have LongReadLen set, so I thought this was the cause. But
> I have now set LongReadLen on the db handle to 9000.

Apparently this is an Oracle problem, because it accepted data longer than its
type allows, so my guess would be that in your table you have a char(n) column
that could be enlarged before the migration.
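(A hedged sketch of how one might look for such columns on the Oracle side; the
GBOPSUI owner is taken from the error message above, and dba_tab_columns needs
DBA privileges:)

SELECT owner, table_name, column_name, data_type, data_length, char_length
  FROM dba_tab_columns
 WHERE owner = 'GBOPSUI'
   AND data_type IN ('CHAR', 'VARCHAR2', 'CLOB', 'BLOB')
 ORDER BY table_name, column_id;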

Hope this helps.
And please report the version of ora2pg when asking for help.

Luca




Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Adrian Klaver

On 8/13/19 10:34 AM, Daulat Ram wrote:

Hi,

We are using the ora2pg version and the table data types shown below.

bash-4.2$ ora2pg -v
Ora2Pg v20.0
bash-4.2$

SQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP;

DATA_TYPE

TIMESTAMP(6)
FLOAT
CLOB
NUMBER
CHAR
DATE
VARCHAR2
BLOB

SQL>

We are getting the same issue for tables that have blob, clob and char data
types.


The ora2pg issue below seems to have more information on this:

https://github.com/darold/ora2pg/issues/342



Thanks,
Daulat

-Original Message-
From: Luca Ferrari 
Sent: Tuesday, August 13, 2019 8:32 PM
To: Daulat Ram 
Cc: [email protected]; [email protected]
Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram  wrote:

Initially I did not have LongReadLen set, so I thought this was the cause. But I
have now set LongReadLen on the db handle to 9000.


Apparently this is an Oracle problem, because it accepted data longer than its
type allows, so my guess would be that in your table you have a
char(n) column that could be enlarged before the migration.

Hope this helps.
And please report the version of ora2pg when asking for help.

Luca




--
Adrian Klaver
[email protected]




RE: ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Daulat Ram
Hi,

We are using the ora2pg version and the table data types shown below.

bash-4.2$ ora2pg -v
Ora2Pg v20.0
bash-4.2$

SQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP;

DATA_TYPE

TIMESTAMP(6)
FLOAT
CLOB
NUMBER
CHAR
DATE
VARCHAR2
BLOB

SQL>

We are getting the same issue for tables that have blob, clob and char data
types.

Thanks,
Daulat

-Original Message-
From: Luca Ferrari  
Sent: Tuesday, August 13, 2019 8:32 PM
To: Daulat Ram 
Cc: [email protected]; [email protected]
Subject: Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram  wrote:
> Initially I did not have LongReadLen set, so I thought this was the cause. But
> I have now set LongReadLen on the db handle to 9000.

Apparently this is an Oracle problem, because it accepted data longer than its
type allows, so my guess would be that in your table you have a
char(n) column that could be enlarged before the migration.

Hope this helps.
And please report the version of ora2pg when asking for help.

Luca