Re: Performance differential when 0 values present vs when 1 values present. Planner returns 52k rows when 0 expected.

2022-04-28 Thread Emil Iggland

> I don't think that index can be used for your original query. It could
> only be used if "channel" is unique in "valueseries" and you'd written
> the query as:

Thanks! That explanation I can understand; now I know how to avoid this 
in the future.


> I guess "channel" must not be the primary key to "valueseries" and
> that's why you use an IN().
Correct. We create a new valueseries in some circumstances, so multiple 
valueseries can point to the same channel.





On 2022-04-27 10:22, David Rowley wrote:

On Wed, 27 Apr 2022 at 19:54, Emil Iggland  wrote:


  > You've got the wrong column order (for this query anyway) in that
  > index.  It'd work a lot better if dataview were the first column;



I might be misunderstanding you, but I assume that you are suggesting an
index on (dataview, valuetimestamp).
We have that index, it is the primary key. For some reason it isn't
being selected.


I don't think that index can be used for your original query. It could
only be used if "channel" is unique in "valueseries" and you'd written
the query as:

select * from datavalue
where dataview = (select id from valueseries where channel = 752433)
order by valuetimestamp desc
fetch first row only;

that would allow a backwards index scan using the (dataview,
valuetimestamp) index.  Because you're using the IN clause to possibly
look for multiple "dataview" values matching the given "channel", the
index range scan does not have a single point to start at.  What
you've done with the LATERAL query allows the index to be scanned once
for each "valueseries" row with a "channel" value matching your WHERE
clause.
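
The LATERAL rewrite described above would look roughly like this (a
sketch only; the table and column names are taken from the thread, but
the exact query Emil used is not shown in this chunk, so the outer
ordering and row limit are assumptions):

```sql
-- The inner query can do one backwards scan of the
-- (dataview, valuetimestamp) index per matching valueseries row,
-- because each scan starts at a single "dataview" point.
SELECT dv.*
FROM valueseries vs
CROSS JOIN LATERAL (
    SELECT *
    FROM datavalue
    WHERE dataview = vs.id
    ORDER BY valuetimestamp DESC
    FETCH FIRST ROW ONLY
) dv
WHERE vs.channel = 752433
ORDER BY dv.valuetimestamp DESC
FETCH FIRST ROW ONLY;
```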

I guess "channel" must not be the primary key to "valueseries" and
that's why you use an IN().

The above query would return an error if multiple rows were returned
by the subquery.

David





LISTEN NOTIFY sometimes huge delay

2022-04-28 Thread Peter Eser HEUFT [Germany]

Hi all,

I have a table with time series data and on this table a trigger for 
notifies:


CREATE TRIGGER containers_notify AFTER INSERT ON containers FOR EACH ROW 
EXECUTE PROCEDURE containers_notify('containers_notify_collector');


and the function does:

PERFORM pg_notify(CAST(TG_ARGV[0] AS text), row_to_json(NEW)::text);
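
Pieced together, the setup would look roughly like this (a sketch
reconstructed from the fragments above; everything in the function body
other than the pg_notify() call is assumed):

```sql
CREATE OR REPLACE FUNCTION containers_notify() RETURNS trigger AS $$
BEGIN
    -- The channel name arrives as the first trigger argument;
    -- the payload is the inserted row serialized as JSON.
    PERFORM pg_notify(CAST(TG_ARGV[0] AS text), row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER containers_notify
    AFTER INSERT ON containers
    FOR EACH ROW
    EXECUTE PROCEDURE containers_notify('containers_notify_collector');
```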

so that another application (Java) fetches every inserted row as JSON 
for further processing, polling every half second:


...listenStatement.execute("LISTEN containers_notify_collector");
...PGNotification notifications[] = 
((org.postgresql.PGConnection) notifyPGCon.getUnderlyingConnection()).getNotifications();



This works like a charm, but occasionally (I think under higher load on 
the system) the notifications are received long after the INSERTs (up 
to hours!). No notifications are lost; they are only very late. The 
delay grows, as if a queue were building up, but the Java process polls 
for the notifications fairly quickly, so no queue should be growing.

Versions:
PostgreSQL 10.12 on x86_64-pc-linux-gnu, compiled by 
x86_64-pc-linux-gnu-gcc (Gentoo 6.4.0-r1 p1.3) 6.4.0, 64-bit

JDBC 42.2.23

The commit of the inserting application is fast, so the inserts 
themselves are not slowed down.

Are the notifications delivered asynchronously with respect to the 
commit/trigger?

Thanks for any help,

Peter








Re: LISTEN NOTIFY sometimes huge delay

2022-04-28 Thread Tom Lane
"Peter Eser HEUFT [Germany]"  writes:
> I have a table with time series data and on this table a trigger for 
> notifies:
> containers_notify AFTER INSERT ON containers FOR EACH ROW EXECUTE 
> PROCEDURE containers_notify('containers_notify_collector')
> and the function does:
> PERFORM pg_notify(CAST(TG_ARGV[0] AS text), row_to_json(NEW)::text);

> This works like a charm, but occasionally (I think under higher load on 
> the system) the notifications are received long after the INSERTs (up 
> to hours!). No notifications are lost; they are only very late. The 
> delay grows, as if a queue were building up, but the Java process polls 
> for the notifications fairly quickly,

Hm.  We've not previously had reports of late notifications.  One idea
that comes to mind is that the server won't deliver notifications as
long as the client has an open transaction, so is it possible your
listening process sometimes forgets to close its transaction?
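
That behavior is easy to reproduce in two psql sessions (a sketch; the
channel name is taken from the thread):

```sql
-- Session A (the listener):
LISTEN containers_notify_collector;
BEGIN;      -- an open transaction in the listening session
            -- blocks delivery of incoming notifications

-- Session B (the notifier):
NOTIFY containers_notify_collector, 'hello';
-- equivalently: SELECT pg_notify('containers_notify_collector', 'hello');

-- Session A sees nothing until its transaction ends:
COMMIT;     -- the queued notification is delivered only now
```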

> Versions:
> PostgreSQL 10.12 on x86_64-pc-linux-gnu, compiled by 
> x86_64-pc-linux-gnu-gcc (Gentoo 6.4.0-r1 p1.3) 6.4.0, 64-bit
> JDBC 42.2.23

That's pretty old.  We've made a number of changes to the LISTEN/NOTIFY
code since then, although in reading the commit log entries about them,
nothing is said about long-delayed notifications.

regards, tom lane