Re: Creating complex track changes database - challenge!

2018-02-26 Thread Thiemo Kellner

Hi Lukasz

I am working on a generic (reading the information schema and other
database metadata), trigger-based solution for SCD tables, i.e.
tables that keep (or, depending on the SCD type, do not keep) history of the data.
However, it has not grown far and I do not have much time to advance
it, so it evolves very slowly. If you are interested, I would open a
SourceForge project or the like and we can work on it together.


I am very much surprised that no database I know of supports
SCD/historising tables out of the box. In 16 years as an ETL pro I have
seen the wheel reinvented all the time... maybe PostgreSQL wants to
get a head start on this.


Kind regards

Thiemo



Quoting Łukasz Jarych :


Hi Guys,

I already have an idea for creating this complex solution.

Please share your notes and tips if you have any.

1. Keep all changes within a table, including:
-adding rows
-deleting
-editing

This can be managed by adding triggers and one additional table holding
a summary of what was changed.

2. Changing DDL of tables:

I think that creating a trigger on the metadata should solve the problem. How
can I do it? I do not know yet... :)

3. Changing tables versioning.

If it is possible to save a table (a backup or something) to disc, I can
check the latest date of change and save the table with this date and name,
and create a table with all table changes and versions.
What do you think?

4. Still problem with creating whole database versioning.
I found a very interesting link but I do not understand how it works:

https://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx

Best,
Jacek

2018-02-26 12:16 GMT+01:00 Łukasz Jarych :


Hi Manuel,
thank you very much!

Regarding your tool - if it is not supported and it is for a specific case,
I will not use it but figure out something new. I do not even know how to
install these .hs files...

I thought about creating triggers to capture all changes to specific tables,
and for each table name (or number) keeping the changes in one separate table.
What do you think about it?

If all you need is saving and restoring specific table states, logical

dumps with pg_dump should probably be enough for your needs.



Can you explain in detail how I can use it?
What if a user adds a new column? Can I save my table, for example, as version 3
and come back to version 1 in the future (without this new column)?

Best,
Jacek



2018-02-26 12:04 GMT+01:00 Manuel Gómez :


On Mon, Feb 26, 2018 at 11:44 AM Łukasz Jarych 
wrote:


I have to:

1. Keep all changes within a table, including:
-adding rows
-deleting
-editing

2. Save a table with a specific state and recover that state (so, go back
to previous table versions), including comparing tables.

3. Track all DDL and DML changes with the possibility to go back to a previous
version.



Hi,

I had similar needs long ago, so I wrote this tool I called Squealer,
which would transform a specification of a SQL database schema into some
PostgreSQL DDL to create a database that implements the same schema in a
logical sense, but actually stores all historical rows, and even permits
existing rows to have references to (soft-)deleted rows, all while
providing modifiable views that simulate the behavior of a regular table as
specified in the input schema through generous use of INSTEAD OF triggers.
It works somewhat like having version control for your database.

You may find the source code here: https://github.com/mgomezch/squealer
Unfortunately, it has hardly any comments, it is completely unmaintained
and probably unused anywhere, I have no idea whether it builds with today's
libraries, and it does not necessarily break the tradeoffs in this space in
a way that fits your use case.

Note there are major caveats with keeping all historical data around
forever, and the semantics of querying historical data can get complicated,
let alone having current data refer to deleted, historical data.  I built
this for a very specific use case where this was the right design, but
please consider very carefully whether this is what you want.

Storing your database history forever would take a lot of space.
Consider whether you can instead keep a record of changes stored outside
the database in some cheap cold storage.  Also consider just keeping a set
of tables with dynamically structured event records (e.g. JSON fields)
partitioned by time ranges and retained only temporarily, perhaps even in a
separate database.  Any such solution will have significant cost and
performance impact if your database bears a nontrivial load, so be careful.

You could also just place your database on a PostgreSQL cluster by itself
and then keep all WAL segments archived forever, so you could just do
point-in-time recovery to any point in the history of your database.  The
space required would grow very quickly, though, so if you don't really need
the full history forever, but only a fixed retention period, you can surely
use any of the well-known solutions for PostgreSQL backup.

Re: Creating complex track changes database - challenge!

2018-02-26 Thread Thiemo Kellner
I attached what I have got so far. I will set up a shared repository
these days.


Quoting Thiemo Kellner :


Hi Lukasz

I am working on a generic (reading the information schema and other  
database metadata), trigger based solution for SCD tables, i. e.  
tables that keep (or not according to SCD type) history of the data.  
However, it is not far grown and I am not having much time to  
advance it so it evolves very slowly. If you are interested, I would  
open a sourceforge project or the like and we can work on it together.


I am very much surprised that no database I know of supports  
SCD/historising tables out of the box. In 16 years as ETL pro I have  
seen reinvented the wheel all the time... maybe PostgreSQL wants to  
get a head start on this.


Kind regards

Thiemo


--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF





full_install.7z
Description: Binary data


Re: Creating complex track changes database - challenge!

2018-02-27 Thread Thiemo Kellner

You can access code with
git clone ssh://@git.code.sf.net/p/pg-scd/code 
pg-scd-code

and browse it at
https://sourceforge.net/p/pg-scd/code/

On 02/27/18 08:43, Thiemo Kellner wrote:
I attached what I have got so far. I will setup a shared repository 
these days.


Quoting Thiemo Kellner :


Hi Lukasz

I am working on a generic (reading the information schema and other 
database metadata), trigger based solution for SCD tables, i. e. 
tables that keep (or not according to SCD type) history of the data. 
However, it is not far grown and I am not having much time to advance 
it so it evolves very slowly. If you are interested, I would open a 
sourceforge project or the like and we can work on it together.


I am very much surprised that no database I know of supports 
SCD/historising tables out of the box. In 16 years as ETL pro I have 
seen reinvented the wheel all the time... maybe PostgreSQL wants to 
get a head start on this.


Kind regards

Thiemo




--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
SIP/iptel.org: thiemo.kellner
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Is there a way to get the name of the calling function in PL/pgSQL?

2018-03-21 Thread Thiemo Kellner

Hi all

In a function I would like to log the caller. Is there a way to get
its name in PL/pgSQL?
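
One way that should work from PostgreSQL 9.4 on (a sketch; the function name is made up): GET DIAGNOSTICS exposes PG_CONTEXT, the textual call stack, from which the caller's name can be parsed.

create or replace function WHOAMI()
  returns text
  language plpgsql
  as
$body$
declare
    V_STACK text;
begin
    -- PG_CONTEXT holds the textual call stack; line 1 is this
    -- function, line 2 (if any) is the direct caller.
    get diagnostics V_STACK = PG_CONTEXT;
    return split_part(V_STACK, E'\n', 2);  -- empty string at top level
end;
$body$;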


Kind regards

Thiemo

--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF






Autonomous transaction, background worker

2018-03-26 Thread Thiemo Kellner

Hi

I am trying to set up a logging facility and want it to do its work
asynchronously in autonomous transactions. I have read
http://blog.dalibo.com/2016/08/19/Autonoumous_transactions_support_in_PostgreSQL.html
and chapter 47 of the documentation, believing those articles are about
the same thing. However, pg_background_launch mentioned in the blog is not
available on my vanilla installation of PostgreSQL 10.3-2 on Debian from the
PostgreSQL repositories.
Chapter 47 makes me believe I need to code some C functions to use
background workers.


How do things actually stand?

Kind regards Thiemo

--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
SIP/iptel.org: thiemo.kellner
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: Autonomous transaction, background worker

2018-03-28 Thread Thiemo Kellner
I've experimented with background workers a bit. Yes, normally you'd 
write some C, but you could also install this (mentioned in the blog 
post and the source of pg_background_launch):


Thanks for your answer. I want my logger to use only the facilities
PostgreSQL ships with, so I'll go the dblink way. The docs say that
FDW provides a more modern architecture for more or less the same purpose.
However, I do not think the execution of arbitrary statements is possible via FDW.
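
A minimal sketch of that dblink way (connection string and target table are made up here): dblink_exec runs the statement over a separate connection, so it commits independently of the caller's transaction.

create or replace function LOG_AUTONOMOUS(I_MESSAGE text)
  returns void
  language plpgsql
  as
$body$
begin
    -- runs in its own session: the insert persists even if the
    -- calling transaction rolls back afterwards
    perform dblink_exec(
        'dbname=act user=logger',
        format('insert into LOG_TABLE(MESSAGE) values (%L)', I_MESSAGE)
    );
end;
$body$;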


Kind regards Thiemo

--
SIP/iptel.org: thiemo.kellner
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

dblink: could not send query: another command is already in progress

2018-03-29 Thread Thiemo Kellner

Hi all

I am trying to use dblink to create an asynchronous logging facility. I have
the following code:


-- open the dblink if it does not yet exist
V_DBLINK_CONNECTION_NAME :=
  GET_PROPERTY_VALUE_STRING(
  I_PROPERTY_NAME => 'DBLINK_CONNECTION_NAME'
  );
select dblink_get_connections() into V_DBLINK_CONNECTION_NAMES;
if (
V_DBLINK_CONNECTION_NAMES is null
 or V_DBLINK_CONNECTION_NAME != any(V_DBLINK_CONNECTION_NAMES)
) then
V_DBLINK_CONNECT_STRING :=
  GET_PROPERTY_VALUE_STRING(
  I_PROPERTY_NAME => 'DBLINK_CONNECT_STRING'
  );
-- better to use dblink_connect_u with password file?
perform dblink_connect(
V_DBLINK_CONNECTION_NAME,
V_DBLINK_CONNECT_STRING
);
end if;

-- send query asynchronously
-- Use literal (%L) as it returns the value null as the unquoted
-- string NULL.
V_QUERY := format(
$s$select true $s$ || C_LB ||
$s$  from %I( $s$ || C_LB ||
$s$   I_FUNCTION => %L, $s$ || C_LB ||
$s$   I_MESSAGE => %L, $s$ || C_LB ||
$s$   I_LEVEL => %L, $s$ || C_LB ||
$s$   I_PRESENT_USER => %L, $s$ || C_LB ||
$s$   I_SESSION_USER => %L, $s$ || C_LB ||
$s$   I_TRANSACTION_TIMESTAMP => $s$ ||
$s$ %L::timestamp, $s$ || C_LB ||
$s$   I_TRANSACTION_ID => $s$ ||
$s$ %L::bigint, $s$ || C_LB ||
$s$   I_SERVER_PID => $s$ ||
$s$ %L::bigint, $s$ || C_LB ||
$s$   I_REMOTE_ADDRESS => $s$ ||
$s$ %L::inet, $s$ || C_LB ||
$s$   I_REMOTE_PORT => $s$ ||
$s$ %L::bigint $s$ || C_LB ||
$s$   ); $s$ || C_LB ||
$s$commit $s$,
'WRITE_MESSAGE_TO_TABLE',
C_CALLER_FUNCTION,
I_MESSAGE,
I_LEVEL,
C_PRESENT_USER,
C_SESSION_USER,
C_TRANSACTION_TIMESTAMP,
C_TRANSACTION_ID,
C_SERVER_PID,
C_REMOTE_ADDRESS,
C_REMOTE_PORT
);
-- send query when connection is ready
V_WAIT_FOR :=
  GET_PROPERTY_VALUE_INTERVAL(
  I_PROPERTY_NAME => 'BUSY_WAIT_INTERVAL'
  ); -- to avoid continuous re-querying, already queried here
-- surprisingly, dblink_is_busy does not return boolean,
-- but 0 for false
while dblink_is_busy(V_DBLINK_CONNECTION_NAME) != 0 loop
perform pg_sleep_for(V_WAIT_FOR);
end loop;
perform dblink_send_query(
V_DBLINK_CONNECTION_NAME,
V_QUERY
);
raise notice 'Connection busy: %', dblink_is_busy(V_DBLINK_CONNECTION_NAME);
raise notice 'Last error: %', 
dblink_error_message(V_DBLINK_CONNECTION_NAME);
raise notice 'Cancel query: %', 
dblink_cancel_query(V_DBLINK_CONNECTION_NAME);

-- ??? commit needed?
raise notice 'Connection busy: %', dblink_is_busy(V_DBLINK_CONNECTION_NAME);
while dblink_is_busy(V_DBLINK_CONNECTION_NAME) != 0 loop
perform pg_sleep_for(V_WAIT_FOR);
raise notice 'Waited for commit for % seconds', V_WAIT_FOR;
raise notice 'Connection busy: %', dblink_is_busy(V_DBLINK_CONNECTION_NAME);
end loop;
perform dblink_send_query(
V_DBLINK_CONNECTION_NAME,
'commit'
);

I get the following output.
psql:testing/test.pg_sql:41: NOTICE:  Connection busy: 1 

psql:testing/test.pg_sql:41: NOTICE:  Last error: OK 

psql:testing/test.pg_sql:41: NOTICE:  Cancel query: OK 

psql:testing/test.pg_sql:41: NOTICE:  Connection busy: 0 

psql:testing/test.pg_sql:41: NOTICE:  could not send query: another 
command is already in progress


I added all the raise notice statements and the dblink querying and
cancelling to get some information on what is going on, but I am no wiser
than before: without them, the connection was not busy either, yet it
still blocked the second call, even though the commit did not seem to
work and I kept trying to send it. By the way, there is no entry in the
logging table, which does get written when the same function is called
without dblink.


Maybe I am wrong, but I tried the solution with dblink_connect and
dblink_send_query instead of simply dblink, believing that the dblink
function would open and close a connection at every call. I wanted to
avoid that overhead.

Re: dblink: could not send query: another command is already in progress

2018-03-30 Thread Thiemo Kellner

On 03/30/18 07:39, Laurenz Albe wrote:

psql:testing/test.pg_sql:41: NOTICE:  Connection busy: 1

psql:testing/test.pg_sql:41: NOTICE:  Last error: OK

psql:testing/test.pg_sql:41: NOTICE:  Cancel query: OK

psql:testing/test.pg_sql:41: NOTICE:  Connection busy: 0

psql:testing/test.pg_sql:41: NOTICE:  could not send query: another
command is already in progress



Has anyone an idea?


The cause of the error message is clear; as the documentation says:


Not to me. As mentioned in my first post, originally I did no cancelling
and error checking and had the same error. Trying to cancel did not
alter the behaviour. It boils down to: why do I get "another command is
already in progress" when dblink_is_busy acknowledges, by returning 0,
that the dblink is not busy?



   the query will fail soon. You must still complete the normal query protocol,
   for example by calling dblink_get_result.


Ah, maybe this is the underlying problem. If dblink requires that
results get retrieved by dblink_get_result before the connection is actually
ready to receive another query, it would explain the error I get.
However, I feel that the result of dblink_is_busy is then faulty,
counter-intuitive or just useless in that context. Or I just
misinterpreted the documentation: "checks if connection is busy with an
async query"


My understanding there was that the actual query is still being processed,
including the gathering of the results. I did not count the keeping of the
result as part of the query.


I hope I can check on that today and get back with my findings.

--
SIP/iptel.org: thiemo.kellner
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: dblink: could not send query: another command is already in progress

2018-03-30 Thread Thiemo Kellner

On 03/30/18 11:14, Laurenz Albe wrote:

You have to consume the result before you can send the next query.


I changed the implementation but still get the same error, now with a
different context. I tried to retrieve the result but failed.
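
For reference, a sketch of the pattern Laurenz describes (connection name as in the thread, queries made up): every dblink_send_query has to be drained with dblink_get_result, called until it returns an empty set, before the next send.

perform dblink_send_query('PGLOGGER_CONNECTION', 'select 1');
-- drain the result ...
perform t.RES
  from dblink_get_result('PGLOGGER_CONNECTION') as t(RES integer);
-- ... and call again to consume the empty terminating set
perform t.RES
  from dblink_get_result('PGLOGGER_CONNECTION') as t(RES integer);
-- only now is the connection free for the next query
-- (which must itself be drained the same way)
perform dblink_send_query('PGLOGGER_CONNECTION', 'select 2');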


I committed the last code to its project repository at SourceForge 
https://sourceforge.net/p/pglogger/code/HEAD/tree/


and put the code to pastebin.com
Test calls: https://pastebin.com/xfUp9NAL
function WRITE_MESSAGE: https://pastebin.com/LDjE0Czx
function WRITE_MESSAGE_TO_TABLE: https://pastebin.com/vkBkehZF

Output
psql:testing/test.pg_sql:41: NOTICE:  2018-03-30 18:41:18.245592
   - Calling function:
   - Current user: act
   - Session user: act
   - Transaction timestamp: 2018-03-30 18:41:18.240580
   - Transaction ID:
   - Server process ID: 22401
   - Address of the remote connection:
   - Port of the remote connection:
   - Message: Test WRITE_MESSAGE
 write_message
---

(1 row)

psql:testing/test.pg_sql:42: NOTICE:  2018-03-30 18:41:18.257750
   - Calling function:
   - Current user: act
   - Session user: act
   - Transaction timestamp: 2018-03-30 18:41:18.240580
   - Transaction ID:
   - Server process ID: 22401
   - Address of the remote connection:
   - Port of the remote connection:
   - Message: Test WRITE_MESSAGE 2nd call
psql:testing/test.pg_sql:42: ERROR:  function 
WRITE_MESSAGE_TO_TABLE(i_function => text, i_message => text, i_level => 
text, i_present_user => name, i_session_user => name, 
i_transaction_timestamp => timestamp with time zone, i_transaction_id => 
bigint, i_server_pid => bigint, i_remote_address => inet, i_remote_port 
=> integer) does not exist
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.
CONTEXT:  Error occurred on dblink connection named 
"PGLOGGER_CONNECTION": could not execute query.
SQL statement "SELECT * from 
dblink_get_result(V_DBLINK_CONNECTION_NAME) as t(ID bigint)"

PL/pgSQL function write_message(text,text) line 126 at PERFORM
psql:testing/test.pg_sql:42: STATEMENT:  select WRITE_MESSAGE('Test 
WRITE_MESSAGE 2nd call', 'NOTICE');


--
SIP/iptel.org: thiemo.kellner
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: dblink: could not send query: another command is already in progress

2018-04-03 Thread Thiemo Kellner



On 04/03/18 11:28, Laurenz Albe wrote:

[...]
psql:testing/test.pg_sql:42: ERROR:  function
WRITE_MESSAGE_TO_TABLE(i_function => text, i_message => text, i_level =>
text, i_present_user => name, i_session_user => name,
i_transaction_timestamp => timestamp with time zone, i_transaction_id =>
bigint, i_server_pid => bigint, i_remote_address => inet, i_remote_port
=> integer) does not exist
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.
CONTEXT:  Error occurred on dblink connection named
"PGLOGGER_CONNECTION": could not execute query.
SQL statement "SELECT * from
dblink_get_result(V_DBLINK_CONNECTION_NAME) as t(ID bigint)"
PL/pgSQL function write_message(text,text) line 126 at PERFORM
psql:testing/test.pg_sql:42: STATEMENT:  select WRITE_MESSAGE('Test
WRITE_MESSAGE 2nd call', 'NOTICE');


I couldn't dig through all your code, but two things seemed suspicious:


Thanks for looking into it all the same.


if (
 V_DBLINK_CONNECTION_NAMES is null
  or V_DBLINK_CONNECTION_NAME != any(V_DBLINK_CONNECTION_NAMES)


I don't think you want "!=ANY" there.  Shouldn't that be "<>ALL" ?
"<>ANY" will *always* be true if there are at least two different entries.


In my simple test it works as expected in that respect. But I did
use it the wrong way, as I thought that != would invert the result of the
ANY operation as a whole. I changed it to != ALL but the error persists.
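
A quick illustration of the difference (standalone queries):

select 1 <> any (array[1, 2]);  -- true:  1 differs from the element 2
select 1 <> all (array[1, 2]);  -- false: 1 equals the first element
select 3 <> all (array[1, 2]);  -- true:  3 matches no element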



The other thing is that you seem to call "dblink_get_result" on any existing
connection before use.  But you can only call the function if there is a
result outstanding.


I call dblink_get_result only if I do not open a dblink connection, i.e.
only on the second and following function calls. I put more notice output
into the code, showing that dblink_send_query has been called once before
the first call of dblink_get_result. I changed my query to reflect the
bigint return value of the called function write_message_to_table. The
error persists.


I put a new pastebin https://pastebin.com/7R45R7qK and committed the 
changes.


--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
SIP/iptel.org: thiemo.kellner
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

dblink: give search_path

2018-04-10 Thread Thiemo Kellner

Hi all

I am trying to execute a function that is not in the schema I connect to with
dblink. Is there a way to tell dblink to set search_path in a specific
way? I have not found a solution in the documentation. I tried the
set search_path clause in the function declarations to no avail.


Function Schema: logger
Database: act
User: act
User Default Schema: act
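
One approach that should work (a sketch; the connection name is made up): libpq connection strings accept an options parameter, which can set search_path for the remote session, and dblink_exec can set it after connecting as well.

select dblink_connect(
           'logger_conn',
           'dbname=act user=act options=-csearch_path=logger'
       );
-- or, once connected:
select dblink_exec('logger_conn', 'set search_path to logger');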

Kind regards

Thiemo

--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF






psql variable to plpgsql?

2018-04-10 Thread Thiemo Kellner

Hi all

Is there a way to pass the value of a psql variable into function
code? I create a schema with the help of a psql variable

   \set SCHEMA_NAME LOGGER
   create schema :SCHEMA_NAME;

I would like to create a function that has the schema hard-coded, like

   declare
      V_SCHEMA_NAME text := :SCHEMA_NAME;
   begin

but as the plpgsql code is within quotes, it fails.

Kind regards

Thiemo

--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF






Re: psql variable to plpgsql?

2018-04-17 Thread Thiemo Kellner

Quoting Pavel Stehule :


no. The :xxx is not evaluated inside strings. The workaround is using GUC
variables and related functions. They can be used from psql and from plpgsql too.


Hi Pavel, thanks for pointing this out. However, I implemented another
solution with a dedicated PostgreSQL user where I do not need such
variables in literals.
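
For reference, Pavel's GUC workaround might look like this (the setting name myapp.schema_name is made up):

\set SCHEMA_NAME logger
select set_config('myapp.schema_name', :'SCHEMA_NAME', false);

create or replace function SHOW_SCHEMA()
  returns text
  language plpgsql
  as
$body$
begin
    -- reads the session-level setting written above by psql
    return current_setting('myapp.schema_name');
end;
$body$;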


--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF






rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets

2018-04-17 Thread Thiemo Kellner

Hi all

When running the following query in psql (server and client version 10 with
replication, on Debian 9), I get the message mentioned in the subject. I
have not found much searching the internet. There were suggestions about
bloat, so I ran "vacuum (verbose, full, analyze)", but the message remains.


with PRO_UNNESTED_TYPES as(
  select
oid as PROOID,
PRONAME,
unnest(PROARGTYPES) as PROARGTYPE,
PRONAMESPACE,
PROOWNER
  from
PG_CATALOG.PG_PROC
) select
  P.PRONAME,
  string_agg(
T.TYPNAME,
', '
  ) as PARAMETER_LIST_STRING,
  G.GRANTEE
from
  PRO_UNNESTED_TYPES P
inner join PG_CATALOG.PG_TYPE T on
  P.PROARGTYPE = T.OID
inner join PG_CATALOG.PG_NAMESPACE N on
  P.PRONAMESPACE = N.OID
inner join INFORMATION_SCHEMA.ROUTINE_PRIVILEGES G on
  -- copied from INFORMATION_SCHEMA.ROUTINE_PRIVILEGES source
 -- as seen in DBeaver 4.3.2
(
(
  P.PRONAME::text || '_'::text
)|| P.PROOID::text
  )::INFORMATION_SCHEMA.SQL_IDENTIFIER = G.SPECIFIC_NAME
where
  N.NSPNAME = current_user
  and G.GRANTEE != current_user
group by
  P.PROOID,
  P.PRONAME,
  G.GRANTEE
order by
  P.PRONAME asc,
  G.GRANTEE asc,
  PARAMETER_LIST_STRING asc;

I use this installation for development, and for the time being I install and
re-install a couple of functions, only 3 tables and a single view. I
install into a proper schema which gets re-created at the beginning of my
install script.


I also ran the following statement (shown below its output), which I found
on the net, to get an idea of the bloat in my database.


pg_depend_reference_index   944 kB
pg_proc 904 kB
pg_depend_depender_index880 kB
pg_largeobject_metadata_oid_index   8192 bytes
pg_enum_typid_sortorder_index   8192 bytes
pg_enum_typid_label_index   8192 bytes
pg_largeobject_loid_pn_index8192 bytes
pg_enum_oid_index   8192 bytes
pg_statistic_ext_oid_index  8192 bytes
pg_statistic_ext_name_index 8192 bytes

I am quite OK with pg_proc; however, I do not know why the pg_depend
indexes are so big and whether this is normal. The rest is fine by me too.


select
  relname,
  pg_size_pretty(
pg_relation_size(C.oid)
  )
from
  pg_class C
left join pg_namespace N on
  (
N.oid = C.relnamespace
  )
where
  nspname = 'pg_catalog'
order by
  2 desc limit 10;

I do not feel that my DB has a problem, but I was taken aback a bit when
I first saw the message in the subject.

I would be grateful for a bit of light shed on this.

Kind regards Thiemo

--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

array UNNESTed to rows stable with respect to order?

2018-04-17 Thread Thiemo Kellner

Hi all

I have created the following statement to get the ordered parameter list of
functions. I use UNNEST to get rows from the array. This works fine, but I am
not sure whether the ordering remains stable in later use. The background is
PL/pgSQL that revokes grants to get a pristine start for granting. As the
order of the parameters is important, I need a statement that
returns the properly ordered list of parameters. Maybe I took a
wrong turn and one can achieve this more simply.


Suggestions are very welcome.

Kind regards Thiemo

with PRO_UNNESTED_TYPES as(
  select
oid as PROOID,
PRONAME,
unnest(PROARGTYPES) as PROARGTYPE,
PRONAMESPACE,
PROOWNER
  from
PG_CATALOG.PG_PROC
) select
  P.PRONAME,
  string_agg(
T.TYPNAME,
', '
  ) as PARAMETER_LIST_STRING,
  G.GRANTEE
from
  PRO_UNNESTED_TYPES P
inner join PG_CATALOG.PG_TYPE T on
  P.PROARGTYPE = T.OID
inner join PG_CATALOG.PG_NAMESPACE N on
  P.PRONAMESPACE = N.OID
inner join INFORMATION_SCHEMA.ROUTINE_PRIVILEGES G on
  -- copied from INFORMATION_SCHEMA.ROUTINE_PRIVILEGES source
 -- as seen in DBeaver 4.3.2
(
(
  P.PRONAME::text || '_'::text
)|| P.PROOID::text
  )::INFORMATION_SCHEMA.SQL_IDENTIFIER = G.SPECIFIC_NAME
where
  N.NSPNAME = current_user
  and G.GRANTEE != current_user
group by
  P.PROOID,
  P.PRONAME,
  G.GRANTEE
order by
  P.PRONAME asc,
  G.GRANTEE asc,
  PARAMETER_LIST_STRING asc;



--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: array UNNESTed to rows stable with respect to order?

2018-04-17 Thread Thiemo Kellner

Quoting Paul Jungwirth :

I think you are looking for `WITH ORDINALITY` (in pg 9.4+). For  
instance you could rewrite your first CTE like so:
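
(The example itself was cut in the archive; a sketch of such a rewrite, with ORD as the hypothetical ordinality column, might be:)

with PRO_UNNESTED_TYPES as(
  select
    P.oid as PROOID,
    P.PRONAME,
    T.PROARGTYPE,
    T.ORD,
    P.PRONAMESPACE,
    P.PROOWNER
  from
    PG_CATALOG.PG_PROC P,
    unnest(P.PROARGTYPES) with ordinality as T(PROARGTYPE, ORD)
)
-- downstream, the aggregate can then order explicitly, e.g.
-- string_agg(T.TYPNAME, ', ' order by P.ORD)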


Thanks for the hint.

Kind regards

--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF






Re: Microsoft buys GitHub, is this a threat to open-source

2018-06-07 Thread Thiemo Kellner

Quoting Achilleas Mantzios :


Who hasn't missed sourceforge? Or... freshmeat, while we're at it :)


I am still sticking to SourceForge. I never understood what made people
leave it. I was investigating a bit whether I should move on to GitHub
too, but I do not remember what prevented me from doing so. It remains an
obscure gut feeling that it is not doing the right thing with the data.
I am sad freshmeat was discontinued, independently of SF or GitHub or
whatsoever.


cheers






Re: Can you make a simple view non-updatable?

2018-06-08 Thread Thiemo Kellner

Quoting Ryan Murphy :


Is there any way to set a VIEW to be read-only -- specifically, can I do
this for a view that is automatically updatable due to being simple?


Without saying anything about whether this is directly possible, using
different users with appropriate grants comes to my mind, i.e. maybe
it is time to overhaul the security concept.
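
A sketch of that approach (names made up): the view stays technically updatable, but the role simply lacks the privileges to change it.

revoke insert, update, delete on MY_SIMPLE_VIEW from APP_USER;
grant select on MY_SIMPLE_VIEW to APP_USER;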



Cheer, Thiemo






Re: Question about getting values from range of dates

2018-06-22 Thread Thiemo Kellner

Hi Mike

Quoting Mike Martin :


I have entries for files in database which lack specific values (xml files
not generated)
These values can be obtained by what are called DTC read files, where the
values are the same in the last DTC read file before date of file and the
next DTC read file (by date)

This code works but looks horrendous, so would appreciate any ideas.


But it is formatted so little?

Sorry, I did not quite get the functional specification here, so I
just put forward a technically optimised proposal. It seems to me
equivalent to the following. Note I used UNION ALL because the data
sets of the two cases seem disjoint to me. The ALL makes
PostgreSQL skip the discarding of identical rows (in the
complete set; i.e. without ALL, identical rows within one part of the set
would be deduplicated as well). Column aliasing is partly necessary,
partly there to be clearer.


with BASE as
 (
    select a.recordingdate as recordingdate_a,
           b.recordingdate as recordingdate_b,
           a.registration  as registration_a,
           b.filename      as filename_b
      from create_tdms a
      join (
             select registration,
                    recordingdate,
                    filename
               from create_tdms
              where filename not like 'DTC%') b
        on b.registration = a.registration
     where a.filename like 'DTC%'
 )
  select max(recordingdate_a) as max_recordingdate_a,
         max(recordingdate_b) as max_recordingdate_b,
         registration_a       as registration,
         max(filename_b)      as filename
    from BASE
   where recordingdate_b < recordingdate_a
group by registration_a
union all
  select max(recordingdate_a) as max_recordingdate_a,
         max(recordingdate_b) as max_recordingdate_b,
         registration_a       as registration,
         max(filename_b)      as filename
    from BASE
   where recordingdate_b > recordingdate_a
group by registration_a

Kind regards Thiemo






Re: Database name with semicolon

2018-06-28 Thread Thiemo Kellner

Zitat von "joby.john@nccgroup.trust" :


Not sure what else I can try, or whether the ODBC driver supports database
names with a semicolon at all.


Not knowing much about ODBC, how about single quotes?

And if you pass the entire URL, would it be possible to pass the
values to some kind of function instead?


Kind regards Thiemo








RFC on pglogger

2018-07-14 Thread Thiemo Kellner
Hi

I am quite happy to announce the first release package of pglogger for
structured logging to table and/or standard out. I crafted it inspired
by log4j in the hope I could give back the community a bit. You find
details at https://sourceforge.net/p/pglogger/wiki/Home/

Kind regards

Thiemo



Re: sql questions

2018-07-20 Thread Thiemo Kellner

Quoting haman...@t-online.de:


a) I am running some select query
select ... order by 
Now, I would like to preserve the ordering through further
processing by adding a sequence number

Of course I can do:
create temp sequence mseq;
select xx.*, nextval('mseq') as ord from (select ... order by ) xx;
drop sequence mseq;
Is there a simpler way (avoiding the create/drop parts)?


Can't you just do the ordering at the end of the processing? Maybe you
need to drag the order-by columns along and just drop them at the very
end, if applicable.
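
If the sequence number itself is needed, a common alternative that avoids the create/drop entirely is a window function (table and columns made up; in practice PostgreSQL numbers the rows in the order the subquery delivers them, though the standard does not guarantee it):

select xx.*, row_number() over () as ord
  from (select ID, NAME from T order by NAME) xx;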








PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner



Hi all

I am designing a framework for implementing historisation (SCD). One
feature I would like to provide is a table into which the actual state of
an entity is put, and when this is complete, the history table is
"updated":


     --------------       ---------------
==>  | ENTITY_ACT |  ==>  | ENTITY_HIST |
     --------------       ---------------

I plan to use instead-of triggers on the hist table that read the
actual table and perform all necessary inserts and updates on the
history table. If I want the termination of a record version (actually
the record of a specific business key with a specific payload) to be
propagated up and/or down the referential integrities (no overlapping
validities), I have to make sure that only one of those processes is
modifying a table at a time. I was thinking of a scheduler queue into which
the trigger would put a process request and which PostgreSQL would work through.
Is there a scheduler within PostgreSQL? I read the documentation and
searched the web but could not find a hint. But before going down another
road or implementing something myself, I ask. Maybe this design is no
good at all.


Kind regards

Thiemo




Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
I have seen pg_cron but it is not what I am looking for. It schedules
tasks only by time. I am looking for a FIFO queue. pg_cron does not
prevent simultaneous runs either, I believe.


Quoting Thomas Kellerer :


There is no built-in scheduler, but there is an extension that supplies that

https://github.com/citusdata/pg_cron







Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
This is a queue, but if I am not mistaken it lives outside PostgreSQL,
which is somewhere I am very reluctant to go. I will look at it in more
depth. Thanks!
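
A sketch of a purely in-database FIFO queue (table and columns made up; SKIP LOCKED needs PostgreSQL 9.5+): concurrent workers each claim the oldest pending request without blocking one another.

with NEXT_JOB as (
    select ID
      from PROCESS_QUEUE
     order by ENQUEUED_AT
     limit 1
       for update skip locked
)
delete from PROCESS_QUEUE
 where ID in (select ID from NEXT_JOB)
returning ID;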


Quoting Ron :


Maybe https://github.com/chanks/que is what you need.

On 09/05/2018 02:35 PM, Thiemo Kellner wrote:
I have seen pg_cron but it is not what I am looking for. It  
schedules tasks only by time. I am looking for a fifo queue.  
pg_cron neither prevents from simultaneous runs I believe.







Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
Because I am looking for a FIFO queue and not for a time schedule.
Thanks anyway.


Quoting Tim Clarke :



Why not just call your "do a scheduled run" code from cron?

Tim Clarke







Re: PostgreSQL internal scheduler?

2018-09-05 Thread Thiemo Kellner
Thanks for the hint. Your solution seems good. However, I am
designing my framework for fun, to give something back to the
community, and because I think it is time that historisation were not
reinvented and reimplemented again and again.


Having said that, I think I can get rid of the need to cascade changes
up and down the referential constraints with the following modelling. I am
aware that I would restrict the use of the framework to this model.


       ---------------          ----------------------
  ==>  | ENTITY_HEAD | >--..--- | PARENT_ENTITY_HEAD |
  //   ---------------          ----------------------
 //           |
==>  | ENTITY_ACT |
 \\           |
  \\         /|\
   \\   ---------------
    ==> | ENTITY_HIST |
        ---------------

The head table only contains the business key, the business surrogate
key and foreign-key attributes to parent tables. Strictly, it could also
contain payload attributes that never ever change their value. But who is
to guarantee that? However, it is SCD0, so once a value is entered it
never changes.

The history table contains all the other attributes, the payload so to
speak, and gets the historisation. Version matching between related
entities could be done by a generated view, so no one would have to
re-invent the wheel.


One could generate the head table from the business key in the actual
table, and generate the hist table from the actual table without the
business key plus a template table with history attributes. Can one
exclude attributes from inheritance?
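
A minimal DDL sketch of that split (all names made up; uuid_generate_v4 assumes the uuid-ossp extension is installed):

create table ENTITY_HEAD (
    ID           uuid primary key default uuid_generate_v4(),
    BUSINESS_KEY text not null unique,
    PARENT_ID    uuid references ENTITY_HEAD
);

create table ENTITY_HIST (
    HEAD_ID    uuid not null references ENTITY_HEAD,
    VALID_FROM timestamptz not null,
    VALID_TO   timestamptz not null default 'infinity',
    PAYLOAD    text,
    primary key (HEAD_ID, VALID_FROM)
);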


Quoting Jeremy Finzel :

If I follow your use case, we have written something that may just
fit your scenario, and we plan to open source it rather soon.

It has several layers, but let me boil it down. First we use an
open-sourced auditing system to log changes to the source tables.
This becomes your queue. A Postgres background worker will
asynchronously process these changes based on your configuration,
which is highly configurable. It also handles the concurrency you
are asking about.

This allows you to build history tables without requiring you, for
example, to do it directly via a trigger. It also removes redundancy
if you have the same key updated multiple times. It assumes we are
fine with the built data not being 100% up to date, because
these updates obviously don't all happen in the same transaction as
the source data change.

Let me know if this interests you and I can share more.

Thanks,
Jeremy





Privilege mess?

2018-10-09 Thread Thiemo Kellner



Hi all

I installed pglogger (https://sourceforge.net/projects/pglogger/) and
try to insert into the "level" table as user "act", but it fails,
claiming insufficient privileges, even though insert is granted to
public (see below). What am I missing?


Kind regards Thiemo


thiemo @ thiemos-toshi ~/external_projects/pg-scd-code/trunk :-( % psql -U act
psql (10.5 (Debian 10.5-1.pgdg90+1))

Type "help" for help.

act=> insert into logger.LEVEL (
act(> SCOPE,
act(> LEVEL
act(> ) values (
act(> 'inline_code_block',
act(> 'INFO'
act(> );
ERROR:  permission denied for schema logger
LINE 1: insert into logger.LEVEL (
^
act=> SELECT grantor, grantee, table_catalog, table_schema, table_name, privilege_type
act->   FROM information_schema.table_privileges
act->  WHERE grantor = 'logger'
act->    AND table_schema = 'logger'
act->    AND table_name = 'level';
 grantor | grantee | table_catalog | table_schema | table_name | privilege_type
---------+---------+---------------+--------------+------------+----------------
 logger  | PUBLIC  | act           | logger       | level      | INSERT
 logger  | PUBLIC  | act           | logger       | level      | SELECT
 logger  | PUBLIC  | act           | logger       | level      | UPDATE
 logger  | PUBLIC  | act           | logger       | level      | DELETE
(4 rows)




Re: Privilege mess?

2018-10-09 Thread Thiemo Kellner



Quoting Christoph Moench-Tegeder :


Schema privileges. I'd guess you miss USAGE on schema logger.
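
Presumably the missing piece would then be a one-liner:

grant usage on schema logger to public;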


Thanks for the hint. Coming from Oracle, I did not know about the USAGE
grant, so I completely ignored the possibility that a different grant was
missing. I just read the documentation but I cannot get a grip
on the sense behind it:


'For schemas, ... this allows the grantee to “look up” objects within  
the schema. Without this permission, it is still possible to see the  
object names, e.g. by querying the system tables.'


Does it not say that you do not need the USAGE privilege, as you can query
the data catalog anyway to get the object's details? And indeed,
DBeaver queries the details of the object without the USAGE privilege.

To carry out actions on objects one needs the specific grant, like
SELECT, anyway. I do not see the point of the USAGE privilege.


I would be grateful if someone could shed some more light on this for me.

Kind regards Thiemo




Re: Privilege mess?

2018-10-09 Thread Thiemo Kellner

 Quoting "David G. Johnston" :

Layers of security. But yes, it is generally sufficient to simply allow
USAGE on a schema without much thought, while ensuring the contained
objects are sufficiently secured.


Thanks :-)




Re: What is the problem with this code?

2018-10-19 Thread Thiemo Kellner
In your place I would double-check whether the table structure in the
database is what you expect. Without knowing the code of the mentioned
function, there seem to be only two numbers in the call. It is quite misty
in the crystal ball to me.


Quoting Igor Korot :


Does anybody have an idea why the code below fails?

Thank you.


-- Forwarded message -
From: Igor Korot 
Date: Thu, Oct 18, 2018 at 11:49 PM
Subject: What is the problem with this code?
To: PostgreSQL ODBC list 


Hi, ALL,

[code]
std::wstring query1 = L"SHOW log_directory";
std::wstring query2 = L"SHOW log_filename";
SQLWCHAR *qry1 = new SQLWCHAR[query1.length() + 2];
SQLWCHAR *qry2 = new SQLWCHAR[query2.length() + 2];
memset( qry1, '\0', query1.length() + 2 );
memset( qry2, '\0', query2.length() + 2 );
uc_to_str_cpy( qry1, query1 );
uc_to_str_cpy( qry2, query2 );
RETCODE ret = SQLAllocHandle( SQL_HANDLE_STMT, m_hdbc, &m_hstmt );
if( ret == SQL_SUCCESS || ret == SQL_SUCCESS_WITH_INFO )
{
ret = SQLPrepare( m_hstmt, qry1, SQL_NTS );
if( ret != SQL_SUCCESS && ret != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1 );
result = 1;
}
else
{
ret = SQLDescribeCol( m_hstmt, 1, columnName, 256,
&columnNameLen, &columnDataType, &columnDataSize, &columnDataDigits,
&columnDataNullable );
if( ret != SQL_SUCCESS && ret != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1 );
result = 1;
}
[/code]

The SQLDescribeCol() call fails with the error "invalid column number".

Does anybody have any idea?

Thank you.







timestamp out of range while casting return value to function's return type

2019-08-23 Thread Thiemo Kellner

Hi all

I created a function that ought to return a timestamptz (and another,
likewise, for timestamp), but calling it I get the mentioned error. What do I
miss? I tried to return a timestamp of the year 2000 to no avail.


Call: select utils.get_max_timestamptz();
--

Function
--
create or replace function GET_MAX_TIMESTAMPTZ()
  returns timestamptz
  language plpgsql
  immutable
  -- Include the hosting schema into search_path so that dblink
  -- can find the pglogger objects. There is no need to access
  -- objects in other schematas not covered with public.
  as
$body$
begin
-- highest timestamps on 64bit lubuntu vanilla PostgreSQL 11.3
return '294277-01-01 00:59:59.99'::timestamptz;
end;
$body$;

Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: timestamp out of range while casting return value to function's return type

2019-08-23 Thread Thiemo Kellner

Hi Tom

Thanks for replying so fast. You are absolutely right.

I changed the code file but failed to install it. :-( I am sorry for  
not checking the obvious.


Kind regards

Thiemo

Quoting Tom Lane :


Thiemo Kellner  writes:

I created a function that ought to return a timestamptz (another
likewise timestamp) but calling it, I get mentionied error. What do I
miss?


Whether that:


 return '294277-01-01 00:59:59.99'::timestamptz;


is in range or not depends on your timezone setting.  It would
be considered in-range in UTC+1 or further east (so that the date
wraps back to 294276AD); in or west of Greenwich, not so much.


I tried to return a timestamp of the year 2000 to no avail.


Hardly ...

regards, tom lane




--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: timestamp out of range while casting return value to function's return type

2019-08-24 Thread Thiemo Kellner

Hi Karsten

Thanks for the infinitely good hint. I vaguely remembered infinity
sometime this morning, looked it up in the docs and have already
dumped my functions in favour of the infinity solution. :-) Great
that PostgreSQL has the infinity concept! Thanks.


Kind regards

Thiemo

Quoting Karsten Hilbert :


On Sat, Aug 24, 2019 at 12:57:07AM +, Thiemo Kellner wrote:


Call: select utils.get_max_timestamptz();
--

Function
--
create or replace function GET_MAX_TIMESTAMPTZ()
  returns timestamptz
  language plpgsql
  immutable
  -- Include the hosting schema into search_path so that dblink
  -- can find the pglogger objects. There is no need to access
  -- objects in other schematas not covered with public.
  as
$body$
begin
-- highest timestamps on 64bit lubuntu vanilla PostgreSQL 11.3
return '294277-01-01 00:59:59.99'::timestamptz;
end;
$body$;


Also, but that's a nitpick perhaps not relevant to your use case:

This

$> psql -d gnumed_v22 -U 
psql (11.5 (Debian 11.5-1+deb10u1))

gnumed_v22=> select 'infinity'::timestamptz;
-[ RECORD 1 ]-
timestamptz | infinity

gnumed_v22=>

is the highest timestamp.

(You *can* count the horses in *your* corral but there's
 always more of them elsewhere ;-)

Just so you are aware.

Best,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: pgmodeler ?

2019-09-01 Thread Thiemo Kellner

 Quoting Olivier Gautherot :


This is the specific error message:


Could not execute the SQL command.
Message returned: ERROR: column pr.proisagg does not exist
LINE 1: ...namespace AS ns ON pr.pronamespace = ns.oid WHERE pr.proisag...
^
HINT: Perhaps you meant to reference the column

--
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
                                                -- Benjamin Franklin


 
Can you confirm the versions of pgModeler and PG? pgModeler 0.91
does not fully support PG 11.x and there is an update in preparation
in 0.92 which should. You may want to give it a try.

 


Actually, this is a known problem:  
https://github.com/pgmodeler/pgmodeler/issues/1281


Maybe you want to give the beta a shot: https://pgmodeler.io/download

Kind regards

Thiemo
 S/MIME Public Key:
https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B




Re: Posible off topic ? pgmodeler

2019-09-02 Thread Thiemo Kellner


Quoting stan :

What I am trying to do at the moment is get a complete understanding of
their business model regarding the source code for this project.

Thanks for any input on this.

Thanks for any input on this.


To the best of my knowledge, Raphael tries to fund his time on the
development of pgModeler by selling access keys to the binary
packages he distributes. https://pgmodeler.io/download states


"Purchase an access key and support this project. Pay using PayPal™ or  
Bitcoins and enjoy ready-to-use packages."


However, it being versioned on GitHub, you could compile it yourself at any
time, and I suppose at least the major Linux distributions provide a
package of a more or less recent version. Ubuntu does, anyway. I
am not aware of a binary distributor other than Raphael for Windows or
Mac, but then again those are not my turf.


Kind regards Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B




Use of 'get diagnostics'?

2019-09-21 Thread Thiemo Kellner

Hi all

I try to create a function (code at https://pastebin.com/mTs18B90)  
using 'get diagnostics' to retrieve the number of affected rows.  
However, it throws


  the following exception was thrown:
SQLSTATE: 42703
column "row_count" does not exist

when I test it with

drop table if exists TEST_EXECUTE_WO_RETURN_LOGGED;
create table TEST_EXECUTE_WO_RETURN_LOGGED(I bigint, C char(2));
commit;
select EXECUTE_WO_RETURN_LOGGED(
I_STATEMENT_TO_EXECUTE  => $$insert into  
TEST_EXECUTE_WO_RETURN_LOGGED(I, C) values (1, 'ab');$$,

I_LEVEL => 'LOG',
I_REPORT_ERRORS_ONLY=> true
);


If you want to try out the code, be aware that it uses pglogger and  
pgutils (both on SourceForge maybe not there in the version yet  
needed, work is ongoing) such that you might want to strip the  
respective calls.


I created another function using 'get diagnostics' that works - it is
part of pglogger. Code snippet:


$body$
declare
C_LOGGING_LEVEL_PROPERTY_NAME constant text := 'LOGGING_LEVEL';
V_ROW_COUNT bigint;
begin
update PROPERTY
   set PROPERTY_VALUE_STRING = I_LEVEL
 where PROPERTY_NAME = C_LOGGING_LEVEL_PROPERTY_NAME;
get current diagnostics V_ROW_COUNT = ROW_COUNT;


I did not find the error I am making.

Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B




Re: Use of 'get diagnostics'?

2019-09-22 Thread Thiemo Kellner

Hi Andrew


Paste sites are for IRC, on the mailing list you should always attach
the necessary details to your message.


OK, I was under the impression that paste sites were preferable to
attachments, which generate traffic not everyone is interested in.



 Thiemo>   the following exception was thrown:
 Thiemo> SQLSTATE: 42703
 Thiemo> column "row_count" does not exist

line 44 of your paste:  V_TEXT := V_TEXT || ROW_COUNT || ' row.';

should be V_ROW_COUNT, I suspect. Likewise line 46.


You are perfectly right and now I feel a bit stupid. Many thanks!

Maybe others had the same idea, but it would help me if the exception
contained the line where the error was found. Though I am not quite
sure whether this is just due to my error handling in the function.


Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: Use of 'get diagnostics'?

2019-09-25 Thread Thiemo Kellner

Hello Adrian

Quoting Adrian Klaver :

To get above I believe you will need to use GET CURRENT DIAGNOSTICS  
PG_CONTEXT:


I actually use "get stacked diagnostics" to retrieve the place of the
exception. And it works. I am not sure why I did not see it.


However, I noticed that the stack does not include the error place in
dynamic SQL executed by the "execute" command. Maybe I am missing
something again.
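
For reference, a minimal sketch of that retrieval in an exception handler:

do $$
declare
    V_CONTEXT text;
begin
    perform 1 / 0;
exception
    when others then
        -- PG_EXCEPTION_CONTEXT names the statement and line that failed
        get stacked diagnostics V_CONTEXT = PG_EXCEPTION_CONTEXT;
        raise notice 'error context: %', V_CONTEXT;
end;
$$;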


Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






pgutils, pglogger and pgutilsL out

2019-10-09 Thread Thiemo Kellner

Hi all

I do not mean to spam, so please tell me if this is not the right place
for release announcements of OSS software for PostgreSQL. Be that as it
may, I am happy to have:


 - pgutils out: providing very basic functionality for PostgreSQL-based
applications (https://sourceforge.net/p/pgutils/wiki/Home/)

 - pglogger in a new version: a logging facility inspired by log4j
(https://sourceforge.net/p/pglogger/wiki/Home/)

 - pgutilsL out: providing common functionality for PostgreSQL, logged
by pglogger (https://sourceforge.net/p/pgutilsl/wiki/Home/)

pgutils and especially pgutilsL are small, but it is a beginning after all. :-)

Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: SQL pretty printer?

2019-10-28 Thread Thiemo Kellner

https://www.sqlinform.com/

Quoting stan :


I have a presentation later in the week, and I plan on printing out some
files containing SQL commands. I have used some "pretty printers" in the
past for things like Perl scripts. What I am thinking of is something that
bolds keywords, handles page breaks, and does some formatting.

Development being done in an Ubuntu Linux environment.

Anyone have a recommendation?

--
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner

Hi

I created a table with an according trigger and trigger function as follows:

drop table if exists CALCULATION_METHOD cascade;
create table CALCULATION_METHOD (ID
   uuid
   not null
   default uuid_generate_v4(),
 METHOD_NAME
   text
   not null,
 DB_ROUTINE_NAME
   name
   not null,
 ENTRY_PIT
   timestamptz
   not null
   default transaction_timestamp(),
 REMARKS
   text,
 constraint CALCULATION_METHOD_PK
   primary key (ID),
 constraint CALCULATION_METHOD_UQ
   unique (DB_ROUTINE_NAME));

create or replace function METHOD_CHECK()
  returns trigger
  language plpgsql
  stable
  as
$body$
declare
V_COUNT smallint;
begin
if TG_NARGS != 1 then
raise
  exception
  using
message = 'METHOD_CHECK expects the schema name to be passed' ||
          ' and nothing more! There have been passed ' ||
          TG_NARGS || ' arguments.',
hint = 'Please check the trigger "' || TG_NAME ||
 '" on table "' || TG_TABLE_NAME || '" in schema "' ||
 TG_TABLE_SCHEMA || '".';
end if;
select COUNT(*) into V_COUNT
  from INFORMATION_SCHEMA.ROUTINES
 where ROUTINE_SCHEMA   = TG_ARGV[1]
   and ROUTINE_NAME = NEW.DB_ROUTINE_NAME;
if V_COUNT != 1 then
raise exception ' expects the schema name to be passed!';
end if;
insert into TG_TABLE_SCHEMA.TG_TABLE_NAME values (NEW.*);
return NULL;
end;
$body$;

create trigger CALCULATION_METHOD_BR_IU
  before insert on CALCULATION_METHOD
  for each row
  execute function METHOD_CHECK(current_schema);


Executing such, the string "current_schema" gets literalised, i.e.  
single quoted:

norge=# \d calculation_method
                           Table "public.calculation_method"
     Column      |           Type           | Collation | Nullable |         Default
-----------------+--------------------------+-----------+----------+-------------------------
 id              | uuid                     |           | not null | uuid_generate_v4()
 method_name     | text                     |           | not null |
 db_routine_name | name                     |           | not null |
 entry_pit       | timestamp with time zone |           | not null | transaction_timestamp()
 remarks         | text                     |           |          |
Indexes:
    "calculation_method_pk" PRIMARY KEY, btree (id)
    "calculation_method_uq" UNIQUE CONSTRAINT, btree (db_routine_name)
Triggers:
    calculation_method_br_iu BEFORE INSERT ON calculation_method FOR EACH ROW EXECUTE FUNCTION method_check('current_schema')



I am using
norge=# select version();
                                                             version
----------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12.5 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 10.2.1 20201028 [revision a78cd759754c92cecbf235ac9b447dcdff6c6e2f], 64-bit


I strongly feel this is a bug; at least it is in no way intended from my
side. However, before filing a bug, I wanted to get your opinion on
it. Maybe it is just a problem of the openSUSE Tumbleweed repository.


I would appreciate your two dimes. Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner

I fixed the array and error handling of the function.

create or replace function METHOD_CHECK()
  returns trigger
  language plpgsql
  stable
  as
$body$
declare
V_COUNT smallint;
begin
if TG_NARGS != 1 then
raise
  exception
  using
message = 'METHOD_CHECK expects the schema name to be passed' ||
          ' and nothing more! There have been passed ' ||
          TG_NARGS || ' arguments.',
hint = 'Please check the trigger "' || TG_NAME ||
 '" on table "' || TG_TABLE_NAME || '" in schema "' ||
 TG_TABLE_SCHEMA || '".';
end if;

select COUNT(*) into V_COUNT
  from INFORMATION_SCHEMA.SCHEMATA
 where SCHEMA_NAME   = TG_ARGV[0];
if V_COUNT != 1 then
raise
  exception
  using
message = 'Schema ' || coalesce('"' || TG_ARGV[0] || '"',
'ω/NULL') ||
'" could not be found!',
hint = 'Please check the trigger "' || TG_NAME ||
 '" on table "' || TG_TABLE_NAME || '" in schema "' ||
 TG_TABLE_SCHEMA || '".';
end if;

select COUNT(*) into V_COUNT
  from INFORMATION_SCHEMA.ROUTINES
 where ROUTINE_SCHEMA   = TG_ARGV[0]
   and ROUTINE_NAME = NEW.DB_ROUTINE_NAME;
if V_COUNT != 1 then
raise
  exception
  using
message = 'Routine "' || NEW.DB_ROUTINE_NAME ||
'" could not be found in schema "' ||
TG_ARGV[0] || '!',
hint = 'Install the routine beforehand.';
end if;
insert into TG_TABLE_SCHEMA.TG_TABLE_NAME values (NEW.*);
return NULL;
end;
$body$;


Running this version, I get another proof that the term was literalised:
psql:common_calculation_method_insert.pg_sql:59: ERROR:  Schema  
"current_schema" could not be found!
HINT:  Please check the trigger "calculation_method_br_iu" on table  
"calculation_method" in schema "public".

CONTEXT:  PL/pgSQL function method_check() line 20 at RAISE


Quoting Thiemo Kellner :


Hi

I created a table with an according trigger and trigger function as follows:

drop table if exists CALCULATION_METHOD cascade;
create table CALCULATION_METHOD (ID
   uuid
   not null
   default uuid_generate_v4(),
 METHOD_NAME
   text
   not null,
 DB_ROUTINE_NAME
   name
   not null,
 ENTRY_PIT
   timestamptz
   not null
   default transaction_timestamp(),
 REMARKS
   text,
 constraint CALCULATION_METHOD_PK
   primary key (ID),
 constraint CALCULATION_METHOD_UQ
   unique (DB_ROUTINE_NAME));

create or replace function METHOD_CHECK()
  returns trigger
  language plpgsql
  stable
  as
$body$
declare
V_COUNT smallint;
begin
if TG_NARGS != 1 then
raise
  exception
  using
message = 'METHOD_CHECK expects the schema name to be passed' ||
          ' and nothing more! There have been passed ' ||
          TG_NARGS || ' arguments.',
hint = 'Please check the trigger "' || TG_NAME ||
       '" on table "' || TG_TABLE_NAME || '" in schema "' ||
       TG_TABLE_SCHEMA || '".';
end if;
select COUNT(*) into V_COUNT
  from INFORMATION_SCHEMA.ROUTINES
 where ROUTINE_SCHEMA   = TG_ARGV[1]
   and ROUTINE_NAME = NEW.DB_ROUTINE_NAME;
if V_COUNT != 1 then
raise exception ' expects the schema name to be passed!';
end if;
insert into TG_TABLE_SCHEMA.TG_TABLE_NAME values (NEW.*);
return NULL;
end;
$body$;

create trigger CALCULATION_METHOD_BR_IU
  before insert on CALCULATION_METHOD
  for each row
  execute funct

Re: Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner



Quoting Rob Sargent :





The function definition doesn’t name any parameters?





Nope, trigger functions cannot have named parameters, according to the documentation. Parameters can be passed as a list/array of values, though. I have no clue why this needs to be so awful/awesome.


--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner



Quoting Tom Lane :


Thiemo Kellner  writes:

create trigger CALCULATION_METHOD_BR_IU
   before insert on CALCULATION_METHOD
   for each row
   execute function METHOD_CHECK(current_schema);



Executing such, the string "current_schema" gets literalised, i.e.
single quoted:


Yup, per the CREATE TRIGGER documentation [1]:

arguments

An optional comma-separated list of arguments to be provided to the
function when the trigger is executed. The arguments are literal
string constants. Simple names and numeric constants can be written
here, too, but they will all be converted to strings. Please check the
description of the implementation language of the trigger function to
find out how these arguments can be accessed within the function; it
might be different from normal function arguments.


I strongly feel this is a bug,


It's operating as designed.  There might be scope for a feature
improvement here, but it'd be a far-from-trivial task, with probably
a lot of ensuing compatibility breakage.



Oh, thanks! I did not read carefully enough. I could not imagine such a behaviour to be intentional.


Well, I guess, I can put the correct schema at installation, but would  
have liked to have a more general approach. Furthermore, I think this  
also implies that installation can only be done by psql. :-s


--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner



Quoting Adrian Klaver :

Well, I guess, I can put the correct schema at installation, but  
would have liked to have a more general approach. Furthermore, I  
think this also implies that installation can only be done by psql.  
:-s


Why not grab the CURRENT_SCHEMA in the function?:

DECLARE
V_COUNT smallint;
C_SCHEMA varchar := CURRENT_SCHEMA;


Thank you all for your lightning fast replies. I failed to set it at install time: \gset variables cannot be used as bind variables in SQL. So, I will try retaining the restriction that the function needs to be installed in the same schema as the triggered table.


--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner

Quoting Adrian Klaver :


Can you provide an outline form of what you are trying to accomplish?


Hm, making myself understood. ;-) So from the very beginning.

There is the concept of growing degree days (https://en.wikipedia.org/wiki/Growing_degree-day). It is a measure of the energy an organism can consume on a specific day for its development. Let's stay with plants. It is basically the daily average temperature. As plants do not grow below a certain, plant-specific temperature, this base temperature gets subtracted from the average. Usually plants grow faster the warmer it is, but only up to a certain temperature, above which the growth rate stays constant. However, the arithmetic temperature average is not the most accurate approximation, so there are other methods to calculate the amount of energy available to grow. To cut a long story short, I implemented several such methods as PL/pgSQL functions. And I am implementing a data model where plants get connected to the amount of growing degree days they need to mature. As this value is method dependent, all the plant values also get the method used to calculate them assigned. To prevent the assignment of imaginary methods, I set up a foreign key relation. Unfortunately, it is not allowed to reference the catalog tables, so I put up a dimension table. In order to prevent imaginary entries there, I want to check that the entry-to-be exists as an installed function (information_schema.routines). I wanted a general solution for the check to facilitate reuse of the method_check trigger function.
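
For illustration, here is a minimal sketch of the simplest such method, the plain daily average against a base temperature with an optional upper cutoff. The function and parameter names are made up for this sketch and are not those of the actual project:

create or replace function GDD_SIMPLE_AVERAGE(
    P_T_MIN  numeric,              -- daily minimum temperature
    P_T_MAX  numeric,              -- daily maximum temperature
    P_T_BASE numeric,              -- base temperature; below it, no growth
    P_T_CAP  numeric default null) -- optional upper growth threshold
  returns numeric
  language sql
  immutable
  as
$body$
    -- cap the daily average, subtract the base temperature, floor at zero
    select greatest(
               least((P_T_MIN + P_T_MAX) / 2,
                     coalesce(P_T_CAP, (P_T_MIN + P_T_MAX) / 2))
               - P_T_BASE,
               0);
$body$;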


--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Possible trigger bug? function call argument literalised

2021-01-03 Thread Thiemo Kellner



Quoting Adrian Klaver :

Familiar with it, I have worked in farming(outdoor and  
indoor(greenhouse)) industries.


Cool

(https://en.wikipedia.org/wiki/Growing_degree-day). It is a measure of the energy an organism can consume on a specific day for its development.


Also used to anticipate pest pressure on plants.


:-)

So if I am following you are trying to set up a dynamic FK like  
process to INFORMATION_SCHEMA.ROUTINES on INSERT to  
CALCULATION_METHOD?


Perfectly summarised.

If that is the case my previous suggestion of finding the  
CURRENT_SCHEMA inside the function would work?


Actually, I did not follow it. I decided to try to go for TG_TABLE_SCHEMA.

Personally I would create a script that built and populated the
CALCULATION_METHOD table as you added the functions to the database
and schema. So:


BEGIN;
CREATE TABLE CALCULATION_METHOD ...

CREATE the_schema.some_dd_fnc();

INSERT INTO CALCULATION_METHOD VALUES()

--Where db_routine_name would be set to the function name.
...

COMMIT;


To me, it does not seem to provide FK functionality. I figure, I also could

insert into CALCULATION_METHOD (DB_ROUTINE_NAME) select ROUTINE_NAME  
from INFORMATION_SCHEMA.ROUTINES;


But again, I had no FK functionality and I would have only the routine  
name. Remarks and other attributes would need to be maintained in  
extra steps.


So, I implemented a non-general solution.

create or replace function METHOD_CHECK()
  returns trigger
  language plpgsql
  volatile
  as
$body$
declare
V_COUNT smallint;
begin
select COUNT(*) into V_COUNT
  from INFORMATION_SCHEMA.ROUTINES
 where ROUTINE_SCHEMA   = TG_TABLE_SCHEMA
   and ROUTINE_NAME = NEW.DB_ROUTINE_NAME;
if V_COUNT != 1 then
raise
  exception
  using
message = 'Routine "' || NEW.DB_ROUTINE_NAME ||
'" could not be found in schema "' ||
TG_TABLE_SCHEMA || '!',
hint = 'Install the routine beforehand.';
end if;
return NEW; -- If NULL was returned, the row would get skipped!
end;
$body$;




--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Possible trigger bug? function call argument literalised

2021-01-04 Thread Thiemo Kellner



Quoting Adrian Klaver :


On 1/3/21 1:44 PM, Thiemo Kellner wrote:



So is the below still only going to fire on INSERT?

If so it will not deal with functions that disappear after the  
INSERT, which in the end makes it similar to my suggestion:) The  
point being you are taking a snapshot in time and hoping that holds  
going forward. Of course when a calculation fails because the  
function is no longer there or has changed you will know a change  
has occurred. Is there some process to deal with the preceding?


Yes, insert only. I reckon there is no way to automatically handle deletes of functions, unless I installed a trigger on the very catalogue table, which I will not consider even as a last resort. I also discarded the update case because I am only interested in the presence check. So, if my dimension table changes some payload attribute values, I do not need to propagate this change anywhere. On the other hand, if someone changes the value of DB_ROUTINE_NAME, I had better check.


It is a project of my own. There is no process defined. ;-)


So, I implemented a non-general solution.

create or replace function METHOD_CHECK()
  returns trigger
  language plpgsql
  volatile
  as
$body$
declare
    V_COUNT smallint;
begin
    select COUNT(*) into V_COUNT
      from INFORMATION_SCHEMA.ROUTINES
     where ROUTINE_SCHEMA   = TG_TABLE_SCHEMA
       and ROUTINE_NAME = NEW.DB_ROUTINE_NAME;
    if V_COUNT != 1 then
        raise
          exception
          using
            message = 'Routine "' || NEW.DB_ROUTINE_NAME ||
                      '" could not be found in schema "' ||
                      TG_TABLE_SCHEMA || '!',
            hint = 'Install the routine beforehand.';
    end if;
    return NEW; -- If NULL was returned, the row would get skipped!
end;
$body$;





--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37





Re: Suggestion: provide a "TRUNCATE PARTITION" command

2021-01-09 Thread Thiemo Kellner

 Quoting Michael Lewis :

Still, no feedback on the effect that a truncate call is having on  
the DB and may be doing more than intended fairly easily. I am not  
in the hackers group so I couldn't say this feature would not be  
implemented. It just seems unlikely given the philosophies of that  
group.


I would not feel bad about having a more efficient, but possibly more dangerous, option. Projects/applications could set up policies about what may be done in which way and what not.

 S/MIME Public Key:
https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37


smime.p7s
Description: S/MIME Signature


Re: Do we need a way to moderate mailing lists?

2021-01-17 Thread Thiemo Kellner


Also, could it be possible to make messages plain text? I see a lot  
of varying fancy fonts and I hate that. I even hate it more when  
people post messages not properly trimmed or messages that need  
formatting preserved such as select output, i.e. table data, explain  
plans, etc. Proportional fonts (Outlook with its darn Arial) is one  
of the worst...


Well, one could say that with HTML messages one can "force" a monospaced font like Consolas or Courier. Then again, there is no guarantee that the receiving end has it installed. And on top, everyone is free to have her/his mail client display plain text in a monospaced font, and is only to blame her/himself if not doing so.



And then there's people posting screen shots instead of copy/paste... :-(


+1

I think, an automatic conversion of incoming posts to plain text and  
dropping all non plain text attachments would help a lot already.


I would not do that. It is work at the wrong end with doubtful results. Wouldn't it be better to reject non-plain-text postings?


While at it, is there a rule of thumb for the length of inline code, compared to attaching code files or to using something like pastebin.com? I only found very coarse instructions on what to do on the lists. Have I been missing a link to a netiquette page?


Cheers Thiemo


--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37


smime.p7s
Description: S/MIME Signature


Re: How to check if a materialised view is being updated?

2021-01-19 Thread Thiemo Kellner
I’ve got a materialized view as a source for my ETL-process, and the  
materialized view takes several hours to refresh.  During which it  
is locked for queries.


Would it be an option to split the process into a cascade of  
materialized views to minimize the actual time of lock?


So I’m looking for a way to identify if the refresh process is  
finished or if it’s still running  - preferably without having to  
wait for timeout by querying a locked materialized view.  But by  
e.g. using the system tables or otherwise.


I cannot answer this, however.

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handys: +41 78 947 36 21 | +49 1578 772 37 37


smime.p7s
Description: S/MIME Signature


Re: Window function?

2022-06-04 Thread Thiemo Kellner

Hi Robert

Interesting problem. I need to think about it.

You need to figure out when Input changes. You can achieve this by using lead or lag (depending on the sort direction over Start): https://www.postgresql.org/docs/current/functions-window.html .
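
A minimal sketch of that idea, assuming your sample below sits in a table events(input, start_ts, end_ts); the table and column names are assumptions for illustration:

with flagged as (
    select input, start_ts, end_ts,
           -- 1 whenever the input differs from the previous row's input
           case when lag(input) over (order by start_ts) = input
                then 0
                else 1
           end as is_new_group
      from events
), grouped as (
    select input, start_ts, end_ts,
           -- the running sum of the flags numbers each consecutive run
           sum(is_new_group) over (order by start_ts) as grp
      from flagged
)
select input,
       min(start_ts) as "Start",
       max(end_ts)   as "End"
  from grouped
 group by input, grp
 order by "Start" desc;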


Hope this nudges you to a solution.

Kind regards

Thiemo

On 04.06.22 at 10:18, Robert Stanford wrote:

Hi,

I have time series data from multiple inputs with start and
end timestamps.

Can anyone suggest an elegant way to coalesce consecutive rows so only 
the
first start time and last end time for each group of events (by input) 
is returned.


Sample from and to below where the rows for Input number 4 could be 
massaged.

(Timestamps selected with timestamp(0) for convenience)

From this:
Input   Start                 End
5       2022-06-04 09:09:00   2022-06-04 09:09:29
4       2022-06-04 09:08:50   2022-06-04 09:09:00
4       2022-06-04 09:08:10   2022-06-04 09:08:50
4       2022-06-04 09:07:47   2022-06-04 09:08:10
17      2022-06-04 09:06:47   2022-06-04 09:07:47
4       2022-06-04 09:06:37   2022-06-04 09:06:47
4       2022-06-04 09:06:29   2022-06-04 09:06:37
4       2022-06-04 09:06:17   2022-06-04 09:06:29
4       2022-06-04 09:05:53   2022-06-04 09:06:17
16      2022-06-04 09:04:33   2022-06-04 09:05:53

To this:
Input   Start                 End
5       2022-06-04 09:09:00   2022-06-04 09:09:29
4       2022-06-04 09:07:47   2022-06-04 09:09:00
17      2022-06-04 09:06:47   2022-06-04 09:07:47
4       2022-06-04 09:05:53   2022-06-04 09:06:47
16      2022-06-04 09:04:33   2022-06-04 09:05:53

Thanks in advance to anyone who can help!
Robert


Re: an difficult SQL

2022-11-05 Thread Thiemo Kellner

Hi Rafal

You first could select the three users with the most recent entries with a window function (https://www.postgresql.org/docs/15/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS), putting it into a WITH query (https://www.postgresql.org/docs/15/sql-select.html). In following WITH queries, I would select 2.1 to 2.3, each with a constant column carrying a different value you later sort by. In a next WITH query you can select all the rest for 2.4 (EXCEPT ALL, https://www.postgresql.org/docs/15/sql-select.html#SQL-EXCEPT, against the results of 2.1 to 2.3), also with the notorious sort column. In a last WITH query you can put the partial results for 2.1 to 2.4 together with a UNION ALL (https://www.postgresql.org/docs/15/sql-select.html#SQL-UNION), sorting by the sort column and the timestamp in the final select.
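
A rough sketch of that plan, reusing the events table from your question; "user" is renamed to user_id here since USER is a reserved word, and the group sizes are hard-coded to 3 users and 10 events for brevity:

with ranked as (
    select tm, user_id, description,
           row_number() over (partition by user_id order by tm desc) as rn
      from events
), recent_users as ( -- the (say 3) users with the most recent events: 2.1 to 2.3
    select user_id,
           dense_rank() over (order by tm desc) as grp
      from ranked
     where rn = 1
     order by tm desc
     limit 3
), heads as (        -- up to (say 10) most recent events per such user
    select r.tm, r.user_id, r.description, u.grp
      from ranked r
      join recent_users u using (user_id)
     where r.rn <= 10
), rest as (         -- 2.4: everything else, later sorted by tm alone
    select tm, user_id, description
      from events
    except all
    select tm, user_id, description
      from heads
)
select tm, user_id, description, grp as sort_key from heads
union all
select tm, user_id, description, 99  as sort_key from rest
order by sort_key, tm desc;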


I do not know your background; however, SQL is about data sets, and it is not always easy to get one's head around thinking in sets. I hope you could follow my suggestions. It might not be the most efficient way, but it should work.


Kind regards

Thiemo


On 05.11.22 at 16:10, Rafal Pietrak wrote:

Hi Everybody,

I was wondering if anybody here could help me cook up a query:

1. against a list of events (like an activity log in the database). 
The list is a single table: create table events (tm timestamp, user 
int, description text).


2. of which the output would be sorted in such a way, that:
2.1 most recent event would "select" most recent events of that same 
user, and displayed in a group (of say 10) of them (in "tm" order).


2.2 going through the events back in time, first event of ANOTHER user 
selects next group, where (say 10) most recent events of that OTHER 
user is presented.


2.3 next most recent event of yet another user selects yet another 
group to display and this selection process goes on, up to a maximum 
of (say 20) users/groups-of-their-events.


2.4 after that, all other events are selected in tm order.

This is to present most recent telephone activities grouped by most 
recent subscribers so that the dashboard doesn't get cluttered with 
information but allows for an overview of other activity of most 
recent users.


I tend to think, that it's a problem for a window function ... but 
I've stumbled on the problem how to limit the window "frame" to just a 
few (say 10) events within the "window" and have all the rest returned 
as "tail" of the query.


BTW: the eventlog table is big. (and partitioned).

Any help appreciated.

-R



--
Signal (Safer than WhatsApp): +49 1578 7723737
Threema (Safer than WhatsApp): A76MKH3J
Handy: +49 1578 772 37 37





Is there something wrong with my test case?

2018-12-25 Thread Thiemo Kellner

Hi all and merry Christmas

I was under the impression that updating a table with values from (an)other table(s) would be implemented most efficiently with a correlated subquery, along the following pattern; let's name it A.


update TO_BE_UPDATED U
   set ({column list}) = ({correlated subquery 1})
 where exists ({correlated subquery 1})
;

I set up a test case to figure out if this is the case. I compared the execution plan of a query with the above pattern to the execution plans of the following patterns.


B
update TO_BE_UPDATED U
   set COL_1 = ({correlated subquery 1}),
   COL_2 = ({correlated subquery 2})
 where exists ({correlated subquery 3})
;

C
update TO_BE_UPDATED U
   set COL_1 = ({correlated subquery 1}),
   COL_2 = ({correlated subquery 2})
 where U.KEY_U in ({correlated subquery 3})
;
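
For concreteness, here is pattern A instantiated with made-up names (SOURCE_TABLE and its columns are placeholders, not the actual test case objects):

update TO_BE_UPDATED U
   set (COL_1, COL_2) = (select S.COL_1, S.COL_2
                           from SOURCE_TABLE S
                          where S.KEY_S = U.KEY_U)
 where exists (select 1
                 from SOURCE_TABLE S
                where S.KEY_S = U.KEY_U);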

Explain analyze verbose showed for:
A (cost=264.72..626.97 rows=31 width=90) (actual time=1.117..1.117  
rows=0 loops=1)
B (cost=264.72..974.25 rows=31 width=58) (actual time=1.508..1.508  
rows=0 loops=1)
C (cost=0.42..611.19 rows=31 width=52) (actual time=2.217..2.217  
rows=0 loops=1)


I am very surprised that the cost of A is (much) higher than that of C, which I suspected to be the most inefficient. I was so fixed on the costs that I initially ignored the actual times, in which my assumptions on efficiency are reflected. Funny though is that the subjective impression while waiting for the update queries to complete was that C was fastest by far, followed by B, and only at the end came update A.


Now I wonder whether I do not know how to read the explain plan summary, whether I should not trust my subjective impression with respect to time measurement, or whether my test case is faulty.


I carried out the test on an idle Windows 10 laptop with portable  
PostgreSQL 10.4 provided by PortableApps.

You can find test case script and log at https://pastebin.com/W2HsTBwi

I would appreciate your two dimes.

Kind regards

Thiemo
--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Is it impolite to dump a message

2019-01-06 Thread Thiemo Kellner

Hi all

I posted a question and did not get a reaction. Now I wonder whether no one took interest (no offence meant), whether no one has an answer, or whether the point in time I posted was just rubbish. In the latter case I would be inclined to bump/repost my question, but only if this was not impolite. Is there a policy?


Kind regards

Thiemo
--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Re: Is there something wrong with my test case?

2019-01-07 Thread Thiemo Kellner



Hi HP

Thanks for your reply.

Quoting "Peter J. Holzer" :


On 2018-12-25 11:54:11 +0000, Thiemo Kellner wrote:
[three different but functionally equivalent queries]


Explain analyze verbose showed for:
A (cost=264.72..626.97 rows=31 width=90) (actual time=1.117..1.117 rows=0
loops=1)
C (cost=0.42..611.19 rows=31 width=52) (actual time=2.217..2.217 rows=0
loops=1)


626.97 doesn't seem "much higher" to me than 611.19. I would call that
"about the same".



So would I, but the cost is given as a range. Taking the average: somewhat 400 compared to somewhat 300. I do not know whether averaging is appropriate here.



This is weird. C takes almost exactly twice as long as A, and while
humans aren't very good at estimating times, One second should feel
faster than two, not slower, and certainly not slower by far. Is it
possible that your subjective impression wasn't based on the executions
you posted but on others? Caching and load spikes can cause quite large
variations in run time, so running the same query again may not take the
same time (usually the second time is faster - sometimes much faster).


I am pretty sure not to have confused the association of my felt time measure with the query. I did run the script several times, but as the script creates everything anew, this might not have any impact caching-wise. However, if caching had an impact, it would just add to the discrepancy with the feeling that the first statement was much faster than the supposedly optimal statement. Be that as it may, there is still the fact that the cost for A was estimated to be about the same as that of C, or much higher, but A was executed in half the time of C.


--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Re: Is there something wrong with my test case?

2019-01-07 Thread Thiemo Kellner

Hi David

Thanks for your revision.

Quoting David Rowley :


On Wed, 26 Dec 2018 at 00:54, Thiemo Kellner
 wrote:

Explain analyze verbose showed for:
A (cost=264.72..626.97 rows=31 width=90) (actual time=1.117..1.117
rows=0 loops=1)
B (cost=264.72..974.25 rows=31 width=58) (actual time=1.508..1.508
rows=0 loops=1)
C (cost=0.42..611.19 rows=31 width=52) (actual time=2.217..2.217
rows=0 loops=1)

I am very surprised that the cost of A is (much) higher than that of C
which I suspected to be the most inefficient. I was that much fixed on
the costs that I initially ignored the actual time where my
assumptions on efficiency are reflected. Funny though is that the
subjective impression when waiting for the update queries to complete
was that C was fastest by far, followed by B and only at the end was
update A.


While the times mentioned in "actual time" are for execution only and
don't account for the time taken to plan the query, the results you
put in [1] disagree entirely with your claim that 'C' was faster. 'A'
comes out fastest with the explain analyzes you've listed.

A:
Planning TIME: 0.423 ms
Execution TIME: 1.170 ms

C:
Planning TIME: 0.631 ms
Execution TIME: 2.281 ms

Have you confused each of the results, perhaps because they're in a
different order as to your cases above?


I am pretty sure I did not confuse them. I am not worried about planning times, as I assume that PostgreSQL has a time limit, in the order of seconds, restricting the time used to find the best execution path, such that for a heavy-load query it would become negligible.



I'd certainly expect 'A' to be the fastest of the bunch since it's
both less effort for the planner and also the executor.  I didn't look
at why the cost is estimated to be slightly higher, but the planner
wouldn't consider rewriting the queries to one of the other cases
anyway, so it's likely not that critical that the costings are
slightly out from reality.


I am glad that my feeling about what should be the best query meets reality. However, I am left a bit concerned about the usefulness of the costs of the execution plan. I feel the costs rather contradict the actual execution times in my test case. To me this would render the cost useless for comparing queries.



where U.KEY_U in ({correlated subquery 3})


This is not correlated in [1].

[1]  https://pastebin.com/W2HsTBwi


Right you are, my fault. Thanks for your attention there as well. :-)

--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Re: Is there something wrong with my test case?

2019-01-07 Thread Thiemo Kellner

Hi Fabio

Quoting Fabio Pardi :


The cost is not a range. The 2 numbers you see are:

  * Estimated start-up cost. This is the time expended before the
    output phase can begin, e.g., time to do the sorting in a sort node.

  * Estimated total cost. This is stated on the assumption that
    the plan node is run to completion, i.e., all available rows are
    retrieved. In practice a node's parent node might stop short of
    reading all available rows (see the LIMIT example below).

As you can read here:

  https://www.postgresql.org/docs/current/using-explain.html


Thanks for pointing this out. I was misled then by the form in which those two different things are displayed.

--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Re: POSTGRES/MYSQL

2019-03-11 Thread Thiemo Kellner



Quoting Adrian Klaver :


On 3/11/19 9:31 AM, Sonam Sharma wrote:

Hi Adrian,
Ours is retail company and the DB size is Max 30gb, currently we  
are using db2.


Things to consider:

1) Migration tools for DB2 --> MySQL/Postgresql. I have not done  
this, so someone else will have to comment.


2) The clients/frameworks/ORMs you use now to connect to the  
database. Do they also work with Postgresql/MySQL?


It is also worth considering whether the architecture/model of DB2 fits PostgreSQL/MySQL. And while at it, how about getting rid of all the itching quirks of the current solution anyway? I see the danger of getting disappointed by either of PostgreSQL/MySQL if the current solution uses DB2 features that cannot be easily mapped to the contenders' features.

Bottom line of my insinuation is that the migration tool could be less of a point if you get the opportunity to overhaul your application.


Kind two dimes

Thiemo


--
Signal: +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B





Table inheritance over schema boundaries possible

2019-05-22 Thread Thiemo Kellner

Hi all

I am wondering if table inheritance is possible over the boundaries of  
schemata and different owners.


I have database act with schemata il and scd. When I issue as IL

create table IL.INHERITANCE_TEST() inherits (SCD.TEMPL_BK);

I get

[Code: 0, SQL State: 42501]  ERROR: permission denied for schema scd

Even though I granted all privileges to IL.

What am I missing or is it just not possible what I want to do?

Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B


smime.p7s
Description: S/MIME Signature


Re: Table inheritance over schema boundaries possible

2019-05-23 Thread Thiemo Kellner


Quoting Achilleas Mantzios :


as of 10 (same with 11) you (IL) must be the owner of relation SCD.TEMPL_BK .
Create your tables with a user who has correct privileges on both  
tables/schemas.
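
In other words, a hedged sketch along that advice, reusing the names from the question (and assuming IL is also the name of the role):

alter table SCD.TEMPL_BK owner to IL;
create table IL.INHERITANCE_TEST() inherits (SCD.TEMPL_BK);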


Not what I hope for but was afraid of. thank you for the answer.


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B


smime.p7s
Description: S/MIME Signature


Re: sequences

2019-06-24 Thread Thiemo Kellner

Hi Karl

I did not double-check with the docs whether the SQL you posted is valid (I guess it could be, by applying the defaults); however, I do not see how sequences would govern the sending of data to users.

Kind regards

Thiemo

Quoting Karl Martin Skoldebrand :


Hi,

I'm trying to troubleshoot a case where users are getting data from a database, despite having said they are not interested.
There is a number of sequences in the database in question, but to  
me they look "empty", like stubs.


create sequence db_table_seq;

alter sequence db_table_seq owner to dbnameuser;

(all names obfuscated)

To me it looks like the only thing going on is changing the ownership of the sequence. For unknown reasons.

What is your impression the intention is with this?

//Martin S



Disclaimer:  This message and the information contained herein is  
proprietary and confidential and subject to the Tech Mahindra policy  
statement, you may review the policy at  
http://www.techmahindra.com/Disclaimer.html  
 externally  
http://tim.techmahindra.com/tim/disclaimer.html  
 internally within  
TechMahindra.







--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B


smime.p7s
Description: S/MIME Signature


Re: Need a DB layout gui

2019-06-25 Thread Thiemo Kellner
You also could try out DbVisualizer (https://www.dbvis.com/). It is available in a feature-reduced free version. A feature comparison is at https://www.dbvis.com/features/feature-list/. I personally bought the Pro version to support development. I rarely use a Pro feature.


Quoting Zahir Lalani :

Navicat - its costs money but always found it very useful for  
managing various DBs


Z
-Original Message-
From: Tim Clarke 
Sent: 24 June 2019 23:56
To: pgsql-general@lists.postgresql.org
Subject: Re: Need a DB layout gui

I've been enjoying dbeaver for this




--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B


smime.p7s
Description: S/MIME Signature


Purely declarative FKs

2023-10-16 Thread Thiemo Kellner

Hi

Please bear with me, if this is the wrong place or this is an old 
question answered over and over again.


Context: In my professional life I rarely come across projects/applications where there are foreign keys on the database. This is due to a desire for loading freedom, not due to there actually being no relations. And usually, there is no data model; only the database objects have been implemented. So, they end up with a bunch/mess of tables, take your pick, and the relations are insider information.


I have read the PostgreSQL documentation on foreign keys. It seems that it is not (easily) possible to have inactive FKs, with which one could have both: the relation information on the database at least, and the freedom of load order/speed at the same time. My short research on the net spoke about disabling the trigger used by the FK. If that is possible, it would be a way, but it feels underhanded to me and I would not have a good feeling doing it.
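
For reference, the closest built-in option I am aware of is NOT VALID, which skips checking existing rows but still enforces the constraint for new DML, so it does not give full load-order freedom either (table names made up):

alter table child
  add constraint child_parent_fk
  foreign key (parent_id) references parent (id)
  not valid;
-- existing rows stay unchecked until:
-- alter table child validate constraint child_parent_fk;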


Question: Are there plans to provide a feature in PostgreSQL such that one can have foreign keys for purely documentational purposes? I know, one could use a modelling tool and just not implement the FKs, but my reality is that there is hardly ever a modelling tool involved.


Kind regards

Thiemo





Re: need SQL logic to find out way's to Implement check sum to validate table data is migrated correctly

2023-10-27 Thread Thiemo Kellner

Hi

Why do you need checksums? Can you not employ a full outer join? My thought behind this is that checksumming is quite costly CPU-wise, and you have to fiddle with each and every relevant column anyway, be it as part of the checksum string, be it as part of the join. The join would have the advantage that you could, if you wanted, also implement logic to tell you precisely in which columns a difference is encountered.
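
A minimal sketch of such a comparison, with assumed names (source_table and target_table keyed by id, one payload column); IS DISTINCT FROM makes the value comparison NULL-safe:

select coalesce(s.id, t.id) as id,
       s.payload            as source_payload,
       t.payload            as target_payload
  from source_table s
  full outer join target_table t
       on s.id = t.id
 where s.id is null                            -- row missing in source
    or t.id is null                            -- row missing in target
    or s.payload is distinct from t.payload;   -- value differs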


Kind regards

Thiemo

On 27.10.2023 at 10:56, Y_esteembsv-forum wrote:

Hi
Need SQL logic/different approach method's  to find out way's
a) to Implement check sum to validate table data is migrated correctly

Any guidance

Regards
Bharani SV

Rule system (and triggers)

2023-11-09 Thread Thiemo Kellner

Hi all

I am afraid, I have not understood the rule system yet.

I have got 4 tables and a view over all of them.

ladevorgaenge   kanton   tarifgruppe
    0..1 \        |        /
          \       |       /
           \      |      /
            |     |     |
           /|\   /|\   /|\
         tarif_progressiv

A DML onto the view tarif_progressiv_denorm is not possible because of the joins. So I intended to create rules to handle that. However, they do not behave as I expect.


INSERT INTO
    "budget"."tarif_progressiv_denorm"
    (
    "kantonscode",
    "kantonsname",
    "tarifgruppencode",
    "kinder_anzahl",
    "kirchensteuer_flag",
    "einkommen_steuerbares_range_low_boundary",
    "einkommen_steuerbares_range_high_boundary",
    "tarifschritt",
    "mindeststeuer",
    "steuersatz",
    "ladevorgaenge⠒id"
    )
    VALUES
    (
    'AG',
    null,
    'M',
    1,
    false,
    10,
    30,
    10,
    10,
    20,
    '0562b97a-87af-4071-b56d-f25b4e9bca0f'
    );

a) Unexpected not-null constraint violation for kanton⠒id

update tarif_progressiv_denorm
   set kantonscode = 'BE'
 where kantonscode = 'AG';

update tarif_progressiv_denorm
   set kantonsname = 'Bern'
 where kantonsname = 'Aargau';

update tarif_progressiv_denorm
   set kantonsname = 'Zürich'
 where kantonscode = 'AG';

b) I thought the following would throw a not-null constraint violation because kanton⠒id must not be null.


update tarif_progressiv_denorm
   set kantonscode =  null
 where kantonscode = 'AG';

c) I noticed that, even though the rules define logic for the other attributes, those do not get changed if not present in an update. While this is actually good, it surprises me nonetheless.


Did I miss some reading in the doc? Must I use triggers instead?

Btw, I am using DbVis against PostgreSQL 16 for the queries.

Kind regards

Thiemo
drop table if exists BUDGET.TARIF_PROGRESSIV cascade;

create table
BUDGET.TARIF_PROGRESSIV (ID                          uuid
                             default gen_random_uuid()
                             constraint PK⠒TG⠒ID
                                 primary key,
                         KANTON⠒ID                   uuid
                             not null
                             constraint FK⠒TP⠒K⠒ID
                                 references BUDGET_MASTER.KANTON (ID),
                         TARIFGRUPPE⠒ID              uuid
                             not null
                             constraint FK⠒TP⠒TG⠒ID
                                 references BUDGET_MASTER.TARIFGRUPPE (ID),
                         KINDER_ANZAHL               smallint
                             not null,
                         KIRCHENSTEUER_FLAG          boolean
                             not null,
                         EINKOMMEN_STEUERBARES_RANGE numrange
                             not null,
                         TARIFSCHRITT                numeric(1000, 2)
                             not null,
                         MINDESTSTEUER               numeric(1000, 2)
                             not null,
                         STEUERSATZ                  numeric(1000, 998)
                             not null,
                         LADEVORGAENGE⠒ID            uuid
                             constraint FK⠒TP⠒LV⠒ID
                                 references BUDGET_LOG.LADEVORGAENGE (ID),
                         constraint UK⠒TP⠒UQ unique (KANTON⠒ID,
                                                     TARIFGRUPPE⠒ID,
                                                     KINDER_ANZAHL,
                                                     KIRCHENSTEUER_FLAG,
                                                     EINKOMMEN_STEUERBARES_RANGE));


comment on column BUDGET.TARIF_PROGRESSIV.ID is
  'Künstlicher Schlüssel';

comment on column BUDGET.TARIF_PROGRESSIV.KINDER_ANZAHL is
  '';

comment on column BUDGET.TARIF_PROGRESSIV.KIRCHENSTEUER_FLAG is
  'Muss Kirchensteuer eingerechnet werden';

comment on column BUDGET.TARIF_PROGRESSIV.EINKOMMEN_STEUERBARES_RANGE is
  '';

comment on column BUDGET.TARIF_PROGRESSIV.TARIFSCHRITT is
  '';

comment on column BUDGET.TARIF_PROGRESSIV.MINDESTSTEUER is
  '';

comment on column BUDGET.TARIF_PROGRESSIV.STEUERSATZ is
  '';

comment on table BUDGET.TARIF_PROGRESSIV is
  'Tarife progressiver Quellensteuer
$Id$';


commit; -- In contrast to Oracle, ddls do not commit implicitly.
drop table if exists BUDGET_LOG.LADEVORGAENGE cascade;

create table if not exists
BUDGET_LOG.LADEVORGAENGE (ID             uuid
                              default gen_random_uuid()
                              constraint PK⠒LV⠒ID primary key,
                          LADEVORGANG_PT timestamp
                              default current_timesta

Detection of which attributes should get set in update trigger

2023-11-10 Thread Thiemo Kellner

Hi all

I have a view, that is a join over 4 tables (the setup of the rule 
question). I want to update the central table over the view (by an 
instead-of trigger). How can I determine, whether an attribute should 
get set to NULL, "new.XYZ is null" or whether it should be left alone. 
Is there a leave-me-alone function determining, whether an attribute is 
in the set clause of an update statement. I am thinking along the line 
code in the trigger function like


update TABLE
   set XYZ = case
  when leave-me-alone (new.XYZ) then old.XYZ
  else new.XYZ
 end;

And what about the where condition... hm, guess I am a bit confused.

Kind regards

Thiemo





Re: Detection of which attributes should get set in update trigger

2023-11-10 Thread Thiemo Kellner
Thanks for the reply. I confirm the behaviour. Your explanation makes sense if I consider having read that with an update a complete new record version gets written.


On 10.11.2023 at 14:35, David G. Johnston wrote:
On Friday, November 10, 2023, Thiemo Kellner 
 wrote:


Hi all

I have a view, that is a join over 4 tables (the setup of the rule
question). I want to update the central table over the view (by an
instead-of trigger). How can I determine, whether an attribute
should get set to NULL, "new.XYZ is null" or whether it should be
left alone. Is there a leave-me-alone function determining,
whether an attribute is in the set clause of an update statement.
I am thinking along the line code in the trigger function like

update TABLE
   set XYZ = case
  when leave-me-alone (new.XYZ) then old.XYZ
  else new.XYZ
 end;

And what about the where condition... hm, guess I am a bit confused.


IIUC, The NEW record is complete, with every value as it should be in 
the newly saved tuple.  There is no leave-me-alone concept.  You don’t 
get to know if the column was specified or not in the update command.


David J.


Conditional compilation

2023-11-12 Thread Thiemo Kellner

Hi

Does PostgreSQL have something like Oracle's conditional compilation? 
This is sort of an if-then statement that gets evaluated at compilation/installation time of PL/SQL code. If the condition is met,
the code until the $END gets compiled. It is even possible to switch 
on/off parts of single statements. I suppose it is sort of a 
preprocessor that removes the code part from $IF until $END if the 
condition is not met.


Kind regards

Thiemo





Re: Conditional compilation

2023-11-12 Thread Thiemo Kellner
Thanks for pointing that out. The only use case of conditional compilation I can think of is that one can have arbitrary logging code in some environments without a performance penalty in others, e.g. log as hell in dev, no logging at all in prod, without the "function" having to execute checks for each and every logging statement (as the check has been done at installation already).


begin
    do_something;
    $if check_if_env_is_dev $then
        do_some_logging;
    $end
    do_more_stuff;
end;
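
PostgreSQL has no such preprocessor, but a rough approximation, at the price of a runtime check, is a plain IF over a custom setting. A minimal sketch, where the GUC name myapp.env and the function are made-up assumptions (e.g. myapp.env = 'dev' set in the dev box's postgresql.conf):

create or replace function do_stuff()
  returns void
  language plpgsql
  as
$body$
begin
    -- do_something;
    -- second argument true: return NULL instead of erroring when the GUC is unset
    if current_setting('myapp.env', true) = 'dev' then
        raise info 'some logging';
    end if;
    -- do_more_stuff;
end;
$body$;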

On 12.11.2023 at 16:58, Tom Lane wrote:

Ron  writes:

On 11/12/23 09:32, Thiemo Kellner wrote:

Does PostgreSQL have something like Oracle's conditional compilation? This
is sort of an if then statement that gets evaluated on
compilation/installation time of PL/SQL code. If the condition is met, the
code until the $END gets compiled. It is even possible to switch on/off
parts of single statements. I suppose it is sort of a preprocessor that
removes the code part from $IF until $END if the condition is not met.

Pl/PgSQL is an interpreted language; there is no compilation.  At
creation/installation, it just (I think) does syntax checks; it definitely
doesn't care if a table exists or not.

Yeah.  You can get at least some of the effect of this by just writing
if-statements.  The non-executed chunks of code still have to be
grammatically valid SQL, but they don't ever reach parse analysis
so nothing happens beyond minimal syntax checking.

If that's not enough, what you'll want to look at is using EXECUTE
to execute dynamically-constructed SQL.

regards, tom lane





Re: client/server versions

2023-11-21 Thread Thiemo Kellner

Hi Dick

Out of curiosity, what is the reason there is this zoo of versions. Is 
it impossible to align them to one version?


Cheers

Thiemo





Re: client/server versions

2023-11-21 Thread Thiemo Kellner

On 21.11.2023 at 13:18, Dick Visser wrote:


It is possible, it's just that there is no real need to at the moment, 
so we have not spent that much effort on the topic.

At any time there will always be newer upstream versions.


Ok, fair enough, but it puzzles me that the effort shall be put into making a multitude of client versions work flawlessly with a multitude of server versions, instead of holding server (and client) versions the same.






pPL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner

Hi

My names can contain a special character (⠒), e.g. to separate the 
donator object from the name or the name from the type. Typically, I use 
this with foreign keys, e.g. table PARENT has the surrogate key ID, the 
table CHILD would then have the column PARENT⠒ID. That way, I can use 
the underscore to segment column names like yet_another_column_name. I 
do not like camel case in that case because unquoted names are not case sensitive.


However, I want to create a (trigger) function to impose data 
consistency. For that purpose, I try to copy the data type of a PL/pgSQL 
variable from the base object, a view in that case. Trying so, I get the 
following error on installation of the function.


V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;

Syntax error at "%" … invalid type name. If I use the actual type of the 
column, all is fine.


V⠒NODE_TYPE⠒NAME text;

Please find attached script files of objects directly involved in the 
trigger function.


Is there something, I am doing wrongly?

Kind Regards

Thiemo

@echo Set up function NODE_GOOD⠒TR_B_IU_R;

set search_path = snowrunner, public;

create or replace function NODE_GOOD⠒TR_B_IU_R()
  returns trigger
  language plpgsql
  stable as
$body$
declare
-- V⠒NODE_TYPE⠒NAME text;
    V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;
begin
select NODE_TYPE⠒NAME into V⠒NODE_TYPE⠒NAME
  from NODE⠒V
 where 1 = 1
   and ID = new.NODE⠒ID
   and 1 = 1;
if (V⠒NODE_TYPE⠒NAME = 'Drop-off'
and new.TASK_NAME is null) then
raise exception 'A good connection to a drop-off node must have a 
task name!';
elsif (V⠒NODE_TYPE⠒NAME != 'Drop-off'
   and new.TASK_NAME is not null) then
raise exception 'A good connection to a non-drop-off node cannot 
have a task name!';
end if;
return null;
end;
$body$;


commit;
@echo Set view NODE⠒V up;

create or replace view NODE⠒V as
  select N.NAME as NODE⠒NAME,
 -- N.QUANTITY,
 T.NAME as NODE_TYPE⠒NAME,
 M.MAP⠒NAME,
 M.MAP⠒CODE,
 M.REGION⠒NAME,
 M.REGION⠒COMPLETION_PERCENTATGE,
 A.CENTRICITY⠒NAME,
 A.DIRECTION⠒CODE,
 N.DESCRIPTION as NODE⠒DESCRIPTION,
 T.DESCRIPTION as NODE_TYPE⠒DESCRIPTION,
 M.MAP⠒DESCRIPTION,
 M.REGION⠒DESCRIPTION,
 A.AREA⠒DESCRIPTION,
 A.CENTRICITY⠒DESCRIPTION,
 A.DIRECTION⠒DESCRIPTION,
 N.ID,
 N.NODE_TYPE⠒ID,
 N.MAP⠒ID,
 M.REGION⠒ID,
 N.AREA⠒ID,
 A.CENTRICITY⠒ID,
 A.DIRECTION⠒ID
from NODE N
  inner join NODE_TYPE T
  on N.NODE_TYPE⠒ID = T.ID
  inner join MAP⠒V M
  on N.MAP⠒ID = M.ID
  inner join AREA⠒V A
  on N.AREA⠒ID = A.ID;


create or replace rule NODE⠒R_I as
  on insert
  to NODE⠒V
  do instead
insert into NODE (MAP⠒ID,
  AREA⠒ID,
  NODE_TYPE⠒ID,
  NAME,
  -- QUANTITY,
  DESCRIPTION)
values (/* MAP⠒ID */   case
                            -- ID given
                            when new.MAP⠒ID is not null then new.MAP⠒ID
                            -- name or code and region given
                            when (   new.MAP⠒CODE is not null
                                  or new.MAP⠒NAME is not null)
                             and (   new.REGION⠒ID is not null
                                  or new.REGION⠒NAME is not null)
                                then (select ID
                                        from MAP⠒V
                                       where (   MAP⠒CODE = new.MAP⠒CODE
                                              or MAP⠒NAME = new.MAP⠒NAME)
                                         and (   REGION⠒ID = new.REGION⠒ID
                                              or REGION⠒NAME = new.REGION⠒NAME))
                            else null
                        end,
        /* AREA⠒ID */   case
                            -- ID given
                            when new.AREA⠒ID is not null then new.AREA⠒ID
                            -- name given
                            when (   new.DIRECTION⠒CODE is not null
                                  or new.DIRECTION⠒ID is not null)

Re: pPL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner
Oh, I totally forgot to mention that I ran the scripts with DbVisualizer 
against a 16.1 (Debian 16.1-1.pgdg110+1) server using PostgreSQL JDBC 
Driver 42.6.0 .


On 26.02.2024 at 16:51, Thiemo Kellner wrote:

Hi

My names can contain a special character (⠒), e.g. to separate the 
donator object from the name or the name from the type. Typically, I use 
this with foreign keys, e.g. table PARENT has the surrogate key ID, the 
table CHILD would then have the column PARENT⠒ID. That way, I can use 
the underscore to segment column names like yet_another_column_name. I 
do not like camel-case in that case because names do not have to be case 
sensitive.


However, I want to create a (trigger) function to impose data 
consistency. For that purpose, I try to copy the data type of a PL/pgSQL 
variable from the base object, a view in that case. Trying so, I get the 
following error on installation of the function.


V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;

Syntax error at "%" … invalid type name. If I use the actual type of the 
column, all is fine.


V⠒NODE_TYPE⠒NAME text;

Please find attached script files of objects directly involved in the 
trigger function.


Is there something, I am doing wrongly?

Kind Regards

Thiemo





Re: pPL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner
Thanks for the hint and care. The install script has a set statement 
already and I now added the search_path clause to no avail. Please find 
the entire code attached and a screenshot from the error.


On 26.02.2024 at 17:35, Tom Lane wrote:

Thiemo Kellner  writes:

However, I want to create a (trigger) function to impose data
consistency. For that purpose, I try to copy the data type of a PL/pgSQL
variable from the base object, a view in that case. Trying so, I get the
following error on installation of the function.



V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;



Syntax error at "%" … invalid type name. If I use the actual type of the
column, all is fine.


FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.

regards, tom lane

snowrunner.7z
Description: application/compressed


Re: pPL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner
Shame on me. My bad. It was the order of installation that did not work. Sorry for that. I was misled by the error message. If an object is missing, I would not expect an "invalid type name" message.


Thanks

On 26.02.2024 at 17:53, Thiemo Kellner wrote:
Thanks for the hint and care. The install script has a set statement 
already and I now added the search_path clause to no avail. Please find 
the entire code attached and a screenshot from the error.


On 26.02.2024 at 17:35, Tom Lane wrote:

Thiemo Kellner  writes:

However, I want to create a (trigger) function to impose data
consistency. For that purpose, I try to copy the data type of a PL/pgSQL
variable from the base object, a view in that case. Trying so, I get the
following error on installation of the function.



V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;



Syntax error at "%" … invalid type name. If I use the actual type of the
column, all is fine.


FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.

    regards, tom lane





Me again with an insert trigger problem

2024-02-27 Thread Thiemo Kellner

Hi

I am surprised that my before insert trigger function does not insert 
any rows into NODE_GOOD.


I was under the impression that the trigger function would do the insert 
with the new and possibly adapted values. In my case, to me at least, it 
is very simple. Only records of node type "Drop-off" must and are 
allowed to have a task name. If this is not given, raise an exception. 
What am I missing?


Function code:
create or replace function NODE_GOOD⠒TR_B_IU_R()
  returns trigger
  language plpgsql
  stable
  set search_path = SNOWRUNNER,
                    PUBLIC
  as
$body$
declare
    V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;
begin
    -- raise info ': %', ;
    raise info 'new.NODE⠒ID: %', new.NODE⠒ID;
    raise info 'new.TASK_NAME: %', new.TASK_NAME;
    select NODE_TYPE⠒NAME into V⠒NODE_TYPE⠒NAME
      from NODE⠒V
     where 1 = 1
       and ID = new.NODE⠒ID
       and 1 = 1;
    raise info 'V⠒NODE_TYPE⠒NAME: %', V⠒NODE_TYPE⠒NAME;
    if (    V⠒NODE_TYPE⠒NAME = 'Drop-off'
        and new.TASK_NAME is null) then
        raise exception 'A good connection to a drop-off node must have a task name!';
    elsif (    V⠒NODE_TYPE⠒NAME != 'Drop-off'
           and new.TASK_NAME is not null) then
        raise exception 'A good connection to a non-drop-off node cannot have a task name!';
    end if;
    raise info 'Going to leave the trigger function "NODE_GOOD⠒TR_B_IU_R"';

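    -- NB: returning NULL from a BEFORE row trigger makes PostgreSQL skip the
    -- operation for this row, so the INSERT never happens (see the reply below).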
    return null;
end;
$body$;


The output of the important part of the install script is listed at the end.

Please find attached my code.

Kind regards

Thiemo


insert data into NODE_GOOD⠒V
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
1107cb8d-c1f1-4368-ac7b-72ac3031555a
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: Landslide 
on the Highway

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Drop-off
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
cdb25b50-e6cf-46fe-85f6-47ec72c00a22

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
59dec625-9167-4e63-9022-917e1a751206

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
b4fd810a-2065-4bcc-bd1d-49021d7ade95

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
a3459f1d-2615-4b20-946b-daca4a9e69de

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
c0069eea-0ee0-44ca-8b15-c14e59230a75

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
5917e5d2-bc16-4126-8486-6a8bedca45aa

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
86abd010-d930-4486-9a5e-1e85d8e81faa

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
b80adef3-8233-4e20-8f8e-3a5ccf04aacd

psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the 
trigger function  "NODE_GOOD⠒TR_B_IU_R"
psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NO

Re: Me again with an insert trigger problem

2024-02-27 Thread Thiemo Kellner
Thanks.

27.02.2024 19:09:50 Adrian Klaver :

> 
> On 2/27/24 9:49 AM, Thiemo Kellner wrote:
>> Hi
>> 
>> I am surprised that my before insert trigger function does not insert any 
>> rows into NODE_GOOD.
>> 
>> I was under the impression that the trigger function would do the insert 
>> with the new and possibly adapted values. In my case, to me at least, it is 
>> very simple. Only records of node type "Drop-off" must and are allowed to 
>> have a task name. If this is not given, raise an exception. What am I 
>> missing?
>> 
>> Function code:
>> create or replace function NODE_GOOD⠒TR_B_IU_R()
>>   returns trigger
>>   language plpgsql
>>   stable
>>   set search_path = SNOWRUNNER,
>>                     PUBLIC
>>   as
>> $body$
>> declare
>>     V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;
>> begin
>>     -- raise info ': %', ;
>>     raise info 'new.NODE⠒ID: %', new.NODE⠒ID;
>>     raise info 'new.TASK_NAME: %', new.TASK_NAME;
>>     select NODE_TYPE⠒NAME into V⠒NODE_TYPE⠒NAME
>>       from NODE⠒V
>>      where 1 = 1
>>        and ID = new.NODE⠒ID
>>        and 1 = 1;
>>     raise info 'V⠒NODE_TYPE⠒NAME: %', V⠒NODE_TYPE⠒NAME;
>>     if (    V⠒NODE_TYPE⠒NAME = 'Drop-off'
>>         and new.TASK_NAME is null) then
>>         raise exception 'A good connection to a drop-off node must have a task name!';
>>     elsif (    V⠒NODE_TYPE⠒NAME != 'Drop-off'
>>            and new.TASK_NAME is not null) then
>>         raise exception 'A good connection to a non-drop-off node cannot have a task name!';
>>     end if;
>>     raise info 'Going to leave the trigger function "NODE_GOOD⠒TR_B_IU_R"';
>>     return null;
> 
> Assuming this is row level trigger and run BEFORE:
> 
> https://www.postgresql.org/docs/current/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER
> 
> "
> 
> Row-level triggers fired *BEFORE* can return null to signal the trigger 
> manager to skip the rest of the operation for this row (i.e., subsequent 
> triggers are not fired, and the *INSERT*/*UPDATE*/*DELETE* does not occur for 
> this row). If a nonnull value is returned then the operation proceeds with 
> that row value.
> 
> "
> 
> 
> 
> 
>>     end;
>>     $body$;
>> 
>> 
>> The output of the important part of the install script is listed at the end.
>> 
>> Please find attached my code.
>> 
>> Kind regards
>> 
>> Thiemo
>> 
>> 
>> insert data into NODE_GOOD⠒V
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
>> 1107cb8d-c1f1-4368-ac7b-72ac3031555a
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: Landslide on 
>> the Highway
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Drop-off
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the trigger 
>> function  "NODE_GOOD⠒TR_B_IU_R"
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
>> cdb25b50-e6cf-46fe-85f6-47ec72c00a22
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the trigger 
>> function  "NODE_GOOD⠒TR_B_IU_R"
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
>> 59dec625-9167-4e63-9022-917e1a751206
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the trigger 
>> function  "NODE_GOOD⠒TR_B_IU_R"
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
>> b4fd810a-2065-4bcc-bd1d-49021d7ade95
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  V⠒NODE_TYPE⠒NAME: Pick-up
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  Going to leave the trigger 
>> function  "NODE_GOOD⠒TR_B_IU_R"
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.NODE⠒ID: 
>> a3459f1d-2615-4b20-946b-daca4a9e69de
>> psql:insert_data/NODE_GOOD⠒V.pg_sql:27: INFO:  new.TASK_NAME: 
>> psql:ins
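
The resolution later in the thread is to return the unchanged NEW record instead of NULL. A minimal sketch of the corrected tail of the function (everything else as posted):

        raise info 'Going to leave the trigger function "NODE_GOOD⠒TR_B_IU_R"';
        -- In a row-level BEFORE trigger, returning null tells the trigger
        -- manager to skip the row; returning new lets the INSERT proceed.
        return new;
    end;
    $body$;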

Re: Me again with an insert trigger problem

2024-02-27 Thread Thiemo Kellner

On 27.02.2024 at 21:42, Adrian Klaver wrote:

Also not sure what this:

select NODE_TYPE⠒NAME into V⠒NODE_TYPE⠒NAME
   from NODE⠒V
  where 1 = 1
    and ID = new.NODE⠒ID
    and 1 = 1;

is supposed to be doing, especially the 1 = 1 tests?


The select retrieves the type of the node in order to determine whether 
a task name must be given or not.


It is a habit of mine to pad conditions in the where clause. This way, 
it is easy to comment/uncomment parts of the clause for testing 
purposes. Coming from Oracle, I missed that using "true" is also 
possible and better because it is clearer.



Seems  '... where  ID = new.NODE⠒ID ...' is sufficient.


Right. Semantically sufficient.



Also what do you want to return as NEW?


The unchanged new record. And it works as intended.

Thanks for your help!




Re: Me again with an insert trigger problem

2024-02-27 Thread Thiemo Kellner



On 27.02.2024 at 23:20, Adrian Klaver wrote:

On 2/27/24 14:11, Thiemo Kellner wrote:
It is a habit of mine to pad conditions in the where clause. This way, 
it is easy to comment/uncomment parts of the clause for testing 
purposes. Coming from Oracle, I missed that using "true" is also 
possible and better because it is clearer.



create table true_test(id integer);
insert into true_test select * from generate_series(1, 10000);

select count(*) from true_test where true;
 count
-------
 10000
(1 row)

select count(*) from true_test where id < 100 and true;
 count
-------
    99
(1 row)


I am not sure what you want to show me with your test case. And I am 
not sure whether I made myself clear. Please bear with me while I try 
to make things clearer with an example.


-- bit-harder-to-test-statement
select count(*)
  from TRUE_TEST
 where ID < 100  -- if I want to deactivate this part of the clause, I have to rewrite
   and mod(ID, 5) = 0;

-- bit-easier-to-test-statement
select count(*)
  from TRUE_TEST
 where true
   and ID < 100  -- if I want to deactivate this part of the clause, I just comment it out
   and mod(ID, 5) = 0
   and true;

Cheers




select results on pg_class incomplete

2024-03-14 Thread Thiemo Kellner

Hi

I am trying to access PostgreSQL metadata, possibly in a vain attempt 
to get size data.


I use DbVis with a connection as shown in https://ibb.co/2SDzhXt . I try 
to get information on a regular table "umsaetze". In DbVis's object 
view I can see it - https://ibb.co/WxMnY2c . If I execute the following 
SQL query in DbVis's SQL Commander, the result set is empty - 
https://ibb.co/GngdWLH .


select *
  from PG_CLASS
 where RELNAME = 'umsaetze';

I noticed that the sessions producing the different results are not the 
same - https://ibb.co/wdKcCFc , but seem to connect to different 
databases. The "missing" table is indeed in the budget database.


The connection user is, apart from being a member of pg_monitor, 
vanilla - https://ibb.co/DGs6sQz and https://ibb.co/8xzHrvP .


It seems that pg_class only lists, with respect to custom databases, 
what is in the database one connects to - https://ibb.co/dbbJVbJ .


template1=> select count(*)
  from PG_CLASS
 where RELNAME = 'umsaetze';
 count
-------
     0
(1 row)

template1=> \q

C:\Users\thiemo\AppData\Roaming\MOBAXT~1\home>psql -h hp-slimline-260 -p 
5436 -U monitor budget

psql (11.2, server 16.1 (Debian 16.1-1.pgdg110+1))
WARNING: psql major version 11, server major version 16.
 Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, 
bits: 256, compression: off)

Type "help" for help.

budget=> select count(*)
  from PG_CLASS
 where RELNAME = 'umsaetze';
 count
-------
     2
(1 row)

budget=> \q


Is there a possibility to make the user monitor see all the objects of 
the cluster? The background is that I was hoping to create a query to 
spit out the size of tables in the cluster.


Kind regards

Thiemo
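
Within a single database, the standard size functions already give this. A minimal sketch (my own, not from the thread), listing ordinary tables by total size:

select N.NSPNAME as SCHEMA_NAME,
       C.RELNAME as TABLE_NAME,
       pg_size_pretty(pg_total_relation_size(C.OID)) as TOTAL_SIZE
  from PG_CLASS C
  join PG_NAMESPACE N
    on N.OID = C.RELNAMESPACE
 where C.RELKIND = 'r'  -- ordinary tables only
 order by pg_total_relation_size(C.OID) desc;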




Re: select results on pg_class incomplete

2024-03-14 Thread Thiemo Kellner
Thanks for the enlightenment. A pity. I suppose there is no way of 
working around this?


On 14.03.2024 at 18:01, Adrian Klaver wrote:

On 3/14/24 09:41, Thiemo Kellner wrote:

It seems that pg_class only lists, with respect to custom databases, 
what is in the database one connects to - https://ibb.co/dbbJVbJ .


As listed on the tin:

https://www.postgresql.org/docs/current/catalogs-overview.html

"Most system catalogs are copied from the template database during 
database creation and are thereafter database-specific. A few catalogs 
are physically shared across all databases in a cluster; these are noted 
in the descriptions of the individual catalogs."


pg_class is not one of the global tables.
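
The shared catalogs can be spotted directly via the relisshared flag; a small sketch (pg_database and pg_authid are shared, pg_class is not):

select RELNAME, RELISSHARED
  from PG_CLASS
 where RELNAME in ('pg_class', 'pg_database', 'pg_authid')
 order by RELNAME;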









Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner




On 14.03.2024 at 21:03, David Rowley wrote:

Yeah, maybe dblink and a LATERAL join might be an easy way. Something like:

create extension dblink;

select d.datname, c.relname
  from pg_database d,
       lateral (select *
                  from dblink('dbname=' || d.datname,
                              $$select relname from pg_class where relname = 'pg_class';$$)
                       as c(relname text)
               ) c(relname)
 where d.datallowconn;

   datname   | relname
-------------+----------
 postgres    | pg_class
 template1   | pg_class
 regression  | pg_class
(3 rows)


Thanks for the ideas. As I would want to keep it in the database, dblink 
would be the way to go. Maybe I will create a procedure that creates a 
view in the monitor schema, accessing the respective databases with union 
all to concatenate the data.
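
A sketch of such a view along these lines (the schema and view name are made up; it assumes the connecting role can log into each database, e.g. via .pgpass), collecting per-table sizes from every connectable database through dblink:

create extension if not exists dblink;

create or replace view MONITOR.TABLE_SIZE⠒V as
select D.DATNAME,
       T.RELNAME,
       T.TOTAL_BYTES
  from PG_DATABASE D,
       -- one dblink call per connectable database
       lateral dblink('dbname=' || D.DATNAME,
                      $q$select RELNAME,
                                pg_total_relation_size(OID)
                           from PG_CLASS
                          where RELKIND = 'r'$q$)
         as T(RELNAME name, TOTAL_BYTES bigint)
 where D.DATALLOWCONN;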





Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner

You solve a problem that no one has. Data belonging together may still be 
divided into schemas in a database. Thus, the metadata is also reported and 
archived individually per database.


I am not sure we are talking about the same problem, but I would be 
surprised to be the only one who has experienced filling disks. Maybe I 
am just old enough that disk space has become so cheap the problem no 
longer exists.


With respect to metadata and databases: The point is not that I cannot 
see the tables in another schema (I believe I can, but did not check 
yet), but that I cannot see those in other databases. While this 
actually does not matter much, I still hold it true that a disk filling 
up does not care in which database or schema an explosively growing 
table resides. So, if I have a disk filling up, I would like to easily 
get information on the problematic structures in one go. With PostgreSQL 
this does not seem to be possible out of the box. I can now query each 
database separately, or I can create auxiliary structures like dblinks 
and views to accommodate a "single" query solution. My two dimes.





Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner




You could also create a PostgreSQL foreign server for each of the other 
databases, which would let you issue a query to UNION together the results of a 
query on all of the catalogs.  This would require creating a foreign table for 
pg_class in the other databases.


Thanks. So many possibilities. I am more familiar with dblinks than 
foreign tables. I will read up on it.
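
For comparison, a minimal postgres_fdw sketch (server, mapping and foreign table names are made up, the password is a placeholder): one foreign table per database, then UNION ALL over them.

create extension if not exists postgres_fdw;

create server BUDGET_SRV
  foreign data wrapper postgres_fdw
  options (host 'hp-slimline-260', port '5436', dbname 'budget');

create user mapping for MONITOR
  server BUDGET_SRV
  options (user 'monitor', password 'changeme');

-- Foreign table pointing at pg_class of the budget database.
create foreign table BUDGET_PG_CLASS (
  RELNAME  name,
  RELKIND  "char",
  RELPAGES integer
) server BUDGET_SRV
  options (schema_name 'pg_catalog', table_name 'pg_class');

select 'budget' as DATNAME, RELNAME
  from BUDGET_PG_CLASS
 where RELKIND = 'r'
union all
select current_database(), RELNAME
  from PG_CLASS
 where RELKIND = 'r';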





Re: select results on pg_class incomplete

2024-03-15 Thread Thiemo Kellner
https://wiki.postgresql.org/wiki/Monitoring 


Thanks for the URL. I am not too keen to re-invent the wheel, although 
doing so teaches me about PostgreSQL.





Re: PostgreSQL as advanced job queuing system

2024-03-22 Thread Thiemo Kellner




On 22.03.2024 at 14:15, Fred Habash wrote:
We developed a home-grown queue system using Postgres, but its 
performance was largely hindered by queue tables bloating and the need 
to continuously vacuum them. It did not scale whatsoever. With some 
workarounds, we ended up designing three sets of queue tables, switching 
between them based on some queue stats, vacuuming the inactive set, and 
repeating.
We kept this queue system for low SLA app components. For others, we 
switched to Kafka. Investing in learning and implementing purpose-built 
queue systems pays off in the long term.


I wonder whether one should, as suggested in 
https://youtu.be/VEWXmdjzIpQ&t=543 , choose not to scale either.
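
For what it is worth, the usual low-bloat dequeue pattern in Postgres (a generic sketch, not Fred's rotating-table design; the table is made up) deletes claimed jobs outright and relies on FOR UPDATE SKIP LOCKED so concurrent workers do not block each other:

create table JOB_QUEUE (
  ID         bigint generated always as identity primary key,
  PAYLOAD    jsonb not null,
  CREATED_AT timestamptz not null default now()
);

-- Worker: atomically claim and remove the oldest available job.
-- SKIP LOCKED makes concurrent workers pass over rows already claimed.
delete from JOB_QUEUE
 where ID = (select ID
               from JOB_QUEUE
              order by CREATED_AT
              limit 1
                for update skip locked)
returning ID, PAYLOAD;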





Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner

On 24.03.2024 at 15:54, Erik Wienhold wrote:


This is required by the SQL standard: columns of a primary key must be
NOT NULL.  Postgres automatically adds the missing NOT NULL constraints
when defining a primary key.  You can verify that with \d test1 in psql.


To me, this behaviour, while correct, is not very transparent. I wish 
PG issued a warning about a definition conflict. In PostgreSQL, a PK 
must always be not nullable, so explicitly defining one of a PK's 
columns as nullable is a contradiction one should get notified of.


The two dimes of Thiemo




Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner



On 24.03.2024 at 16:17, Tom Lane wrote:


To do that, we'd have to remember that you'd said NULL, which we
don't: the word is just discarded as a noise clause.  Considering
that this usage of NULL isn't even permitted by the SQL standard,
that seems like a bit too much work.


If I understood correctly, only the NOT NULL expression gets remembered, 
but the NULL gets discarded. No, I do not quite get it. Somehow, it has 
to be decided whether to create a "check constraint" or not, but this 
information is not available any more when creating the primary key? Not 
even in some kind of intermediary catalogue?


"Considering that this usage of NULL isn't even permitted by the SQL 
standard" is in my opinion a strange argument. To me, it is similar as 
to say, well a column has a not null constraint and that must be enough, 
we do not check whether the data complies when inserting or updating. 
Sure, my example has lots more side effect than silently do the right thing.


Please do not get me wrong. I can totally understand that something 
needs too much work to implement. I am just puzzled.





Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner




On 24.03.2024 at 16:36, Andreas Kretschmer wrote:
The nullable constraint addition to a column is pointless because by 
default all columns are nullable. Definition as a primary key adds the 
not null constraint.


While this is certainly true, I do not see why the information that a 
not null constraint is to be created or has been created is not available.





Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner




On 24.03.2024 at 16:39, Erik Wienhold wrote:

And that's also possible in Postgres with UNIQUE constraints if you're
looking for that behavior.


Sort of the distinction between PK and UQ.
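
A tiny sketch of that distinction (my own illustration, made-up table): a UNIQUE column still accepts NULLs, a PK column does not.

create table UQ_VS_PK_TEST (
  C1 integer unique,       -- UNIQUE: duplicates rejected, NULLs allowed
  C2 integer primary key   -- PK: implies NOT NULL
);

insert into UQ_VS_PK_TEST (C1, C2) values (null, 1);  -- accepted
insert into UQ_VS_PK_TEST (C1, C2) values (1, null);  -- fails: not-null violation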





Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner




On 24.03.2024 at 16:35, sud wrote:
On Sun, Mar 24, 2024 at 8:47 PM Tom Lane wrote:
Do you specifically mean that the 'null' keyword just does not make any 
sense here in postgres? But even if that is the case, I tried inserting 
nothing (hoping "nothing" is "null" in the true sense), but then too it 
failed in the first statement while inserting, which is fine as per the 
PK.


To the best of my knowledge, your assumption is correct. And therefore 
the insert must fail because a PK must never contain null values.


But don't you think, in the first place, it shouldn't have been allowed 
to create the table with one of the composite PK columns defined as 
NULL? And then, while inserting the null record, it should say that the 
PK constraint is violated, not the "not null constraint".


CREATE TABLE test1
(
c1 numeric   NULL ,
c2 varchar(36)  NOT NULL ,
CONSTRAINT test1_PK PRIMARY KEY (c1,c2)
) ;

insert into test1(c2) values('123');

/*ERROR: null value in column "c1" of relation "test1" violates not-null 
constraint DETAIL: Failing row contains (null, 123).*/


I feel largely the same way. The definition is contradictory but there 
is no message to tell you so.





Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner




On 24.03.2024 at 16:44, Andreas Kretschmer wrote:

postgres=# create table bla(i int null primary key);
CREATE TABLE
postgres=# \d bla
                Table "public.bla"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 i      | integer |           | not null |
Indexes:
    "bla_pkey" PRIMARY KEY, btree (i)

postgres=# drop table bla;
DROP TABLE
postgres=# create table bla(i int not null primary key);
CREATE TABLE
postgres=# \d bla
                Table "public.bla"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 i      | integer |           | not null |
Indexes:
    "bla_pkey" PRIMARY KEY, btree (i)

postgres=#


As you can see, there is no difference. The PK constraint is the 
important thing here.


This describes the END state perfectly. But what happens while creating 
the table, that is the question.


I am thinking along the lines that a table is created by "first" (1) 
creating the columns in their default state, that is, Nullable would be 
true. After that (2), all the constraints get created. Because the not 
null constraint is not present in the column definition, there is no 
change. After that (3), the primary key gets created, requiring an 
additional not null constraint. Assuming such a creation would lead to 
an error when one already exists, I suppose there is a check on the 
presence of the constraint.
If (2) and (3) are swapped, then the step creating the not null 
constraint would have to go through ALL the column definitions to 
retrieve on which ones such a constraint is defined. At this point, one 
could also check whether the nullability of a column that has already 
been created is the one as defined, be it explicitly using "null"/"not 
null" or the default.




Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner




On 24.03.2024 at 17:15, Christophe Pettus wrote:

I think the point is that it's not really doing anything "silently."  You are 
asking for a PRIMARY KEY constraint on a column, and it's giving it to you.  One of the 
effects (not even really a side-effect) of that request is that the column is then 
declared NOT NULL.


But don't you also request the database to have the column be nullable? 
So PG, at this point, silently prioritises the request for the PK over 
the request for nullability. Does it not?




The reason it doesn't give you a warning is that by the time it would be in a 
position to, it's forgotten that you explicitly said NULL.


How can that be forgotten? This information ends up in the data 
catalogue eventually!



It does see that the column is nullable, but that in itself isn't worth 
emitting a warning over, since you are explicitly telling it that now the 
column shouldn't be null.


I would agree if you had two separate statements there, but in the 
example it was not two different statements but one single contradictory 
statement.



It wouldn't make much more sense to emit a warning there than it would be in 
this situation:

CREATE TABLE t (i int NULL);
ALTER TABLE t ALTER i SET NOT NULL;


Again, these are two separate statements.

Maybe an example can help.

You are describing the situation where one goes to a car salesman and 
orders a car painted in blue. The car gets manufactured and the salesman 
hands you the key. Then you say to the salesman: now, please, repaint it 
in red.

The issue, however, arose because the statement said: "Please order me a 
blue car painted in red." Hopefully, any salesman would respond with 
something like: "Dear customer, all very well, but it is contradictory 
to have a blue car painted in red. Do you want a red car or a blue one?"



Dark it was, the moon shone brightly,
as a carriage, fast as lightning,
slowly drove around the round corner…




Re: Is this a buggy behavior?

2024-03-24 Thread Thiemo Kellner

On 24.03.2024 at 17:43, Christophe Pettus wrote:

The situation is much more like the customer saying, "I understand that the standard 
paint for this car is red, but I wish it painted blue instead."


Not in the least. Declaring the column to be NULL is explicitly 
requesting the car be blue. And declaring, in the same statement, there 
be a pk on that column is implicitly requesting the car be red.



Again, you can argue that PostgreSQL should remember that you explicitly asked 
for a NULL and generate a warning in that case, but that's not a trivial amount 
of work, since right now, that NULL is thrown away very early in statement 
processing.


Only if PG is not aware of being in the process of creating a table.

Well, I do not feel I can make myself understood.




Empty materialized view

2024-03-24 Thread Thiemo Kellner

Hi

I have created a materialized view "with data", and I refreshed it 
"with data". The query of the mv returns records when executed outside 
the mv. I would appreciate help with respect to what I am missing, 
given that my mv is empty. You might want to have a look at the code 
attached.


Kind regards

Thiemo

snowrunner.7z
Description: application/compressed


Re: Empty materialized view

2024-03-24 Thread Thiemo Kellner



On 24.03.2024 at 20:56, Erik Wienhold wrote:

Maybe you executed REFRESH in a transaction but did not commit it?


While I can see the point for the refresh (though there actually is a 
commit), it cannot hold for the create with data, as the mv actually 
got created (confirmed by it being empty).



I can't find any materialized view in your archive.


Oh sh*. That is the file I forgot to commit. Please find it attached now.

\echo Set materialised view QUERY_PER_TASK⠒MV up

drop materialized view if exists QUERY_PER_TASK⠒MV;

create materialized view QUERY_PER_TASK⠒MV as
   select TDP.TOP_LEVEL_TASK_TYPE⠒NAME,
  TDP.TOP_LEVEL_TASK⠒NAME,
  TDP.TASK_TREE⠒HIGHEST_PRIORITY,
  TDP.TASK⠒NAME,
  TDP.TASK_TYPE⠒NAME,
  TDP.TASK⠒PRIORITY,
  TDP.TASK⠒DESCRIPTION,
  TDP.GOOD⠒NAME,
  TDP.GOOD_4_THIS_TASK,
  TDP.QUANTITY⠒DROP_OFF,
  TDP.QUANTITY⠒NEEDED_4_THIS_REGION,
  TDP.QUANTITY⠒NEEDED_4_THIS_TASK_TREE,
  TDP.QUANTITY⠒PICK_UP,
  TDP.QUANTITY⠒READY_4_THIS_REGION,
  TDP.GOOD⠒SIZE,
  TDP.GOOD⠒SIZE * TDP.QUANTITY⠒DROP_OFF as NEEDED_TRANSPORT_CAPACITY_4_THIS_TASK,
  TDP.NEEDED_TRANSPORT_CAPACITY_4_THIS_REGION,
  TDP.NEEDED_TRANSPORT_CAPACITY_4_THIS_TASK_TREE,
  case TDP.NODE_TYPE⠒NAME⠒PICK_UP
  when 'Crafting zone' then R.GOOD⠒RESOURCE⠒NAME
  else null
  end as GOOD⠒RESOURCE⠒NAME,
  P.QUANTITY⠒READY_4_THIS_REGION as RESOURCE_QUANTITY⠒READY_4_THIS_REGION,
  TDP.NODE⠒MAP⠒NAME⠒DROP_OFF,
  TDP.NODE⠒NAME⠒DROP_OFF,
  TDP.NODE⠒MAP⠒NAME⠒PICK_UP,
  TDP.NODE⠒NAME⠒PICK_UP,
  P.NODE⠒MAP⠒NAME as NODE⠒MAP⠒NAME⠒PICK_UP_RESOURCE,
  P.NODE⠒NAME as NODE⠒NAME⠒PICK_UP_RESOURCE,
  TDP.DIRECTION⠒CODE⠒DROP_OFF,
  TDP.CENTRICITY⠒NAME⠒DROP_OFF,
  TDP.DIRECTION⠒CODE⠒PICK_UP,
  TDP.CENTRICITY⠒NAME⠒PICK_UP,
  TDP.NODE⠒DESCRIPTION⠒DROP_OFF,
  TDP.PICK_UP⠒DESCRIPTION,
  TDP.NODE⠒DESCRIPTION⠒PICK_UP,
  TDP.DIRECTION⠒DESCRIPTION⠒DROP_OFF,
  TDP.CENTRICITY⠒DESCRIPTION⠒DROP_OFF,
  TDP.DIRECTION⠒DESCRIPTION⠒PICK_UP,
  TDP.CENTRICITY⠒DESCRIPTION⠒PICK_UP,
  P.DIRECTION⠒DESCRIPTION as DIRECTION⠒DESCRIPTION⠒PICK_UP_RESOURCE,
  P.CENTRICITY⠒DESCRIPTION as CENTRICITY⠒DESCRIPTION⠒PICK_UP_RESOURCE,
  TDP.GOOD⠒DESCRIPTION,
  TDP.GOOD⠒WEIGHT,
  TDP.PATH,
  TDP.REGION⠒NAME,
  TDP.TOP_LEVEL_TASK⠒PRIORITY,
  TDP.REGION⠒ID,
  TDP.TASK_PREDECESSOR⠒ID,
  TDP.TASK⠒ID,
  TDP.LEVEL,
  TDP.GOOD⠒ID
 from SNOWRUNNER.TASK_HIERARCHY_DROP_OFF_PICK_UP_AVAIL⠒V TDP
  left outer join SNOWRUNNER.PRODUCTION⠒V R
   on 1 = 1
  and TDP.GOOD⠒ID = R.GOOD⠒PRODUCT⠒ID
  and TDP.REGION⠒ID = R.REGION⠒ID
  left outer join SNOWRUNNER.PICK_UP⠒V P
   on 1 = 1
  and R.GOOD⠒RESOURCE⠒ID = P.GOOD⠒ID
  and R.REGION⠒ID = P.REGION⠒ID
where 1 = 1
 order by TDP.REGION⠒NAME asc,
  TASK_TREE⠒HIGHEST_PRIORITY asc,
  NODE⠒MAP⠒NAME⠒DROP_OFF asc,
  NODE⠒MAP⠒NAME⠒PICK_UP asc,
  NODE⠒MAP⠒NAME⠒PICK_UP asc,
  NODE⠒MAP⠒NAME⠒PICK_UP_RESOURCE asc,
  NODE⠒NAME⠒PICK_UP_RESOURCE asc,
  GOOD⠒NAME asc,
  NODE⠒MAP⠒NAME⠒DROP_OFF asc,
  PATH asc
  with data;

refresh materialized view QUERY_PER_TASK⠒MV
  with data;


comment on materialized view QUERY_PER_TASK⠒MV is
  '
$Header$';


commit;


Re: Empty materialized view

2024-03-24 Thread Thiemo Kellner

On 24.03.2024 at 21:30, Adrian Klaver wrote:

On 3/24/24 13:11, Thiemo Kellner wrote:
Confirmed in the same session that created it or in a different session?


Different session; I did not know that mattered.

Excerpt of the installation protocol:
…
## tenth level ##
Set materialised view QUERY_PER_TASK⠒MV up
psql:views/QUERY_PER_TASK⠒MV.pg_sql:3: NOTICE:  materialized view 
»query_per_task⠒mv« does not exist, skipping

DROP MATERIALIZED VIEW
SELECT 0
REFRESH MATERIALIZED VIEW
COMMENT
COMMIT
# insert data #
…

Check was done by DbVisualizer.

I was not able to create a test case. I tried

drop table if exists TEST_T cascade;
create table TEST_T (ID smallint);
insert into TEST_T (ID) values (1);
commit;

drop materialized view if exists TEST_MV_ON_TABLE;
create materialized view TEST_MV_ON_TABLE as select * from TEST_T with data; -- on table

commit;

select * from TEST_MV_ON_TABLE;

commit;
create or replace view VIEW_LEVEL_1 as select * from TEST_T;
create or replace view VIEW_LEVEL_2 as select v.id from VIEW_LEVEL_1 v 
cross join TEST_T;
create or replace view VIEW_LEVEL_3 as select v.id from VIEW_LEVEL_2 v 
cross join VIEW_LEVEL_1;
create or replace view VIEW_LEVEL_4 as select v.id from VIEW_LEVEL_3 v 
cross join VIEW_LEVEL_2;
create or replace view VIEW_LEVEL_5 as select v.id from VIEW_LEVEL_4 v 
cross join VIEW_LEVEL_3;
create or replace view VIEW_LEVEL_6 as select v.id from VIEW_LEVEL_5 v 
cross join VIEW_LEVEL_4;
create or replace view VIEW_LEVEL_7 as select v.id from VIEW_LEVEL_6 v 
cross join VIEW_LEVEL_5;
create or replace view VIEW_LEVEL_8 as select v.id from VIEW_LEVEL_7 v 
cross join VIEW_LEVEL_6;
create or replace view VIEW_LEVEL_9 as select v.id from VIEW_LEVEL_8 v 
cross join VIEW_LEVEL_7;
create or replace view VIEW_LEVEL_10 as select v.id from VIEW_LEVEL_9 v 
cross join VIEW_LEVEL_8;

commit;

drop materialized view if exists TEST_MV_ON_VIEWS;
create materialized view TEST_MV_ON_VIEWS as select * from VIEW_LEVEL_10 with data; -- on views

commit;

select * from TEST_MV_ON_VIEWS;



But this works as expected.

Ok, I just added the following to my install script and there the data 
is visible.


select count(*) from SNOWRUNNER.TASK_DEPENDENCY⠒V;







Re: Empty materialized view

2024-03-24 Thread Thiemo Kellner

On 24.03.2024 at 21:50, Adrian Klaver wrote:

On 3/24/24 13:36, Thiemo Kellner wrote:
It does, depending on the order of viewing. Namely, if you viewed the 
'old' empty MV in the outside session before you dropped/created the 
'new' MV and committed the changes.


Something like the viewing session being in a transaction from before 
the (re-)creation of the mv?


The view session is on auto commit. (Its sole purpose is to query stuff 
and not to have to explicitly terminate transactions due to syntax 
errors and so on.)




Excerpt of the installation protocol:
…
## tenth level ##
Set materialised view QUERY_PER_TASK⠒MV up
psql:views/QUERY_PER_TASK⠒MV.pg_sql:3: NOTICE:  materialized view 
»query_per_task⠒mv« does not exist, skipping

DROP MATERIALIZED VIEW
SELECT 0
REFRESH MATERIALIZED VIEW
COMMENT
COMMIT
# insert data #
…





select count(*) from SNOWRUNNER.TASK_DEPENDENCY⠒V;


That is not the view you showed in the SQL attached to your previous 
post, nor what is mentioned above. Also, if I am following your naming 
scheme, it is a regular view, not a materialized view.


Feeling quite dumb now. But then, no data is visible in the install 
session either.


insert data into TASK_DEPENDENCY⠒V
INSERT 0 34
COMMIT
 count
-------
     0
(1 row)


Thanks for taking care.
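
When chasing such cases, checking whether an MV got populated at all can help; a small sketch against pg_matviews (the name is the one from this thread):

select SCHEMANAME, MATVIEWNAME, ISPOPULATED
  from PG_MATVIEWS
 where MATVIEWNAME = 'query_per_task⠒mv';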



