Re: Creating complex track changes database - challenge!

2018-02-26 Thread Łukasz Jarych
Thank you geoff!

I think I will test this one: http://www.liquibase.org/

What about setting up a trigger on the metadata (system catalogs) to detect,
for example, when a column is added?
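
One sketch of that idea: you cannot put ordinary triggers on the system catalogs, but an event trigger can log DDL as it happens. The table and function names below are made up; `pg_event_trigger_ddl_commands()` is only callable from a `ddl_command_end` event trigger.

```sql
-- Log every DDL command; ALTER TABLE entries cover added columns.
CREATE TABLE ddl_log (
    happened    timestamptz DEFAULT now(),
    command_tag text,
    object_id   text
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_log (command_tag, object_id)
        VALUES (r.command_tag, r.object_identity);
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();
```

After an `ALTER TABLE ... ADD COLUMN`, `ddl_log` should contain a row with command_tag = 'ALTER TABLE' and the table's identity.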

Best,
Jacek

2018-02-26 16:43 GMT+01:00 geoff hoffman :

>
>
> There’s https://flywaydb.org/
> and http://www.liquibase.org/
>
> More: https://dbmstools.com/version-control-tools
>
> Also, if you know PHP, Laravel database migrations have worked great for
> us!
> https://laravel.com/docs/5.6/migrations
>
>
>
> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych  wrote:
>
> I would like to ask for your help with tracking changes to my database.
> I am new to PostgreSQL, but I have to learn it quickly because of my boss.
>
> I have to:
>
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
>
> 2. Save table with specific state and recover specific state (so go back
> to previous table versions) including comparing tables.
>
> 3. Track all DDL and DML changes with the possibility to go back to a
> previous version.
>
>
>


psql '\copy to' and unicode escapes

2018-02-26 Thread Steven Hirsch
I fear that I'm missing something very obvious, but I cannot find a syntax
that permits me to use an escaped hexadecimal representation in a CSV file
and have that representation interpreted as the equivalent unicode
character when inserting into the database.  Both client and server are
using UTF8 encoding.

For example, trying to insert the 'degree' symbol, I've tried:

U&"\00b0"
E'\00b0'
"\u00b0"

In all cases, I simply get the literal string in the table, not the desired
unicode character.

If I use them in an 'INSERT' statement, it works properly.  The problem is
almost certainly between the chair and the keyboard, but what am I
misunderstanding?


Re: psql '\copy to' and unicode escapes

2018-02-26 Thread David G. Johnston
On Mon, Feb 26, 2018 at 9:53 AM, Steven Hirsch  wrote:

> I fear that I'm missing something very obvious, but I cannot find a syntax
> that permits me to use an escaped hexadecimal representation in a CSV file
> and have that representation interpreted as the equivalent unicode
> character when inserting into the database.
>

There isn't one - COPY treats input as literals and performs basically no
processing on it.  The system writing the CSV file would have to encode
the actual UTF-8 character, not the string of the code point, directly
into the document (i.e., a capable viewer would display whatever 00b0 is
on-screen, or a placeholder if it is a non-printable character).

INSERT and COPY are two totally different animals:

INSERT INTO tbl (t) VALUES (trim('   jdjd   '));  -- stores 'jdjd'

but put trim('   jdjd   ') in a CSV file and COPY would store the literal
string "trim('   jdjd   ')".
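
To illustrate the writing side of David's point (assuming Python is the producer): the escape must be resolved by the producing program before the bytes reach the CSV; a raw escape string just lands in the file as six characters.

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["25\u00b0C"])    # Python resolves the escape; the file holds the degree sign
writer.writerow([r"25\u00b0C"])   # raw string: the file holds the 6-character escape literally
print(buf.getvalue())
```

COPY will then store each field exactly as it appears in the file - the first row with a real degree sign, the second with the literal backslash sequence.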

David J.


Re: Creating complex track changes database - challenge!

2018-02-26 Thread geoff hoffman
I would personally do that separately: write a bash script & cron job that does 
a schema dump every hour and, if there are any changes, commits them to 
your schema repository; then you can use GitHub or Bitbucket web hooks to do 
stuff with the changeset when it's pushed. 

https://stackoverflow.com/questions/3878624/how-do-i-programmatically-determine-if-there-are-uncommitted-changes
https://stackoverflow.com/questions/24772591/check-if-git-has-changes-programmatically



> On Feb 26, 2018, at 9:36 AM, Łukasz Jarych  wrote:
> 
> Thank you geoff! 
> 
> I think I will test this one: http://www.liquibase.org/
> 
> What about setting up a trigger on the metadata (system catalogs) to detect,
> for example, when a column is added?
> 
> Best,
> Jacek
> 
> 2018-02-26 16:43 GMT+01:00 geoff hoffman :
> 
> 
> There’s https://flywaydb.org/ 
> and http://www.liquibase.org/ 
> 
> More: https://dbmstools.com/version-control-tools
> 
> Also, if you know PHP, Laravel database migrations have worked great for us!
> https://laravel.com/docs/5.6/migrations 
> 
> 
> 
> 
>> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych wrote:
>> 
>> I would like to ask for your help with tracking changes to my database.
>> I am new to PostgreSQL, but I have to learn it quickly because of my boss.
>> 
>> I have to:
>> 
>> 1. Keep all changes within table including:
>> -adding rows
>> -deleting
>> -editing
>> 
>> 2. Save table with specific state and recover specific state (so go back to 
>> previous table versions) including comparing tables.
>> 
>> 3. Track all DDL and DML changes with the possibility to go back to a
>> previous version.
>> 
> 
> 



system catalog permissions

2018-02-26 Thread PropAAS DBA

All;


We have a client which is segmenting their multi-tenant cluster 
(PostgreSQL 9.6) by schema; however, if one of their clients connects via 
pgAdmin, they see ALL schemas, even the ones they don't have access to 
read. I assume pgAdmin is pulling the list from the system catalogs.



What's the right/best practice approach? revoke all from public on 
specific system catalog tables? Which tables?



Thanks in advance




Re: system catalog permissions

2018-02-26 Thread Joshua D. Drake

On 02/26/2018 03:11 PM, PropAAS DBA wrote:

All;


We have a client which is segmenting their multi-tenant cluster 
(PostgreSQL 9.6) by schema, however if one of their clients connects 
via pgadmin they see ALL schemas, even the ones they don't have access 
to read. I assume pgadmin is pulling the list from the system catalogs.



What's the right/best practice approach? revoke all from public on 
specific system catalog tables? Which tables?


AFAIK, you cannot hide the list of schemas, but you can prevent people 
from entering them.
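
A sketch of that approach (role and schema names here are placeholders): revoking USAGE keeps a role from resolving objects inside another tenant's schema, even though the schema name itself stays visible in the catalogs.

```sql
-- Hypothetical tenant roles/schemas for illustration.
REVOKE ALL ON SCHEMA tenant_b FROM PUBLIC;
GRANT USAGE ON SCHEMA tenant_b TO tenant_b_role;
-- tenant_a_role can still see tenant_b listed in pg_namespace,
-- but SELECT * FROM tenant_b.orders now fails with a permission error.
```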



JD




Thanks in advance




--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: system catalog permissions

2018-02-26 Thread Tom Lane
PropAAS DBA  writes:
> We have a client which is segmenting their multi-tenant cluster 
> (PostgreSQL 9.6) by schema, however if one of their clients connects via 
> pgadmin they see ALL schemas, even the ones they don't have access to 
> read. I assume pgadmin is pulling the list from the system catalogs.

> What's the right/best practice approach? revoke all from public on 
> specific system catalog tables? Which tables?

Messing with the system catalog permissions is likely to break stuff
you'd rather not break.

PG generally doesn't assume that anything in the system catalogs is
sensitive.  If you don't want user A looking at user B's catalog
entries, give them separate databases, not just separate schemas.
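
As a sketch of that layout (names are placeholders), each tenant gets its own database and only its own role may connect:

```sql
-- One database per tenant; catalog contents are then per-database.
CREATE DATABASE tenant_a OWNER tenant_a_role;
REVOKE CONNECT ON DATABASE tenant_a FROM PUBLIC;
GRANT CONNECT ON DATABASE tenant_a TO tenant_a_role;
```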

regards, tom lane



Re: system catalog permissions

2018-02-26 Thread Paul Jungwirth

On 02/26/2018 03:47 PM, Tom Lane wrote:

PropAAS DBA  writes:

We have a client which is segmenting their multi-tenant cluster
(PostgreSQL 9.6) by schema, however if one of their clients connects via
pgadmin they see ALL schemas, even the ones they don't have access to
read.

PG generally doesn't assume that anything in the system catalogs is
sensitive.  If you don't want user A looking at user B's catalog
entries, give them separate databases, not just separate schemas.


I'm sure this is what you meant, but you need to give them separate 
*clusters*, right? Even with separate databases you can still get a list 
of the other databases and other roles in the cluster. I would actually 
love to be mistaken but when I looked at it a year or two ago I couldn't 
find a way to lock that down (without breaking a lot of tools anyway).


Thanks!

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: system catalog permissions

2018-02-26 Thread David G. Johnston
On Mon, Feb 26, 2018 at 4:55 PM, Paul Jungwirth  wrote:

> On 02/26/2018 03:47 PM, Tom Lane wrote:
>
>> PropAAS DBA  writes:
>>
>>> We have a client which is segmenting their multi-tenant cluster
>>> (PostgreSQL 9.6) by schema, however if one of their clients connects via
>>> pgadmin they see ALL schemas, even the ones they don't have access to
>>> read.
>>>
>> PG generally doesn't assume that anything in the system catalogs is
>> sensitive.  If you don't want user A looking at user B's catalog
>> entries, give them separate databases, not just separate schemas.
>>
>
> I'm sure this is what you meant, but you need to give them separate
> *clusters*, right? Even with separate databases you can still get a list of
> the other databases and other roles in the cluster. I would actually love
> to be mistaken but when I looked at it a year or two ago I couldn't find a
> way to lock that down (without breaking a lot of tools anyway).
>

Yes, both the database and role namespaces are global to an individual
cluster.  It's another level of trade-off; database and role names could
realistically and easily be done UUID-style, so knowing the labels doesn't
really tell anything except a vague impression of host size.

Assuming clients don't want to see their log files...

David J.


Re: Unexpected behavior with transition tables in update statement trigger

2018-02-26 Thread Thomas Munro
On Tue, Feb 27, 2018 at 4:18 AM, Tom Kazimiers  wrote:
> On Mon, Feb 26, 2018 at 11:15:44PM +1300, Thomas Munro wrote:
>> On Sat, Feb 24, 2018 at 4:47 PM, Tom Kazimiers 
>> wrote:
>> Thanks for the reproducer.  Yeah, that seems to be a bug.
>> nodeNamedTuplestorescan.c allocates a new read pointer for each
>> separate scan of the named tuplestore, but it doesn't call
>> tuplestore_select_read_pointer() so that the two scans that appear in
>> your UNION ALL plan are sharing the same read pointer.  At first
>> glance the attached seems to fix the problem, but I'll need to look
>> more carefully tomorrow.
>
> Thanks very much for investigating this. I can confirm that applying your
> patch results in the tuples I expected in both my test trigger and my actual
> trigger function.

Thanks for testing.

> It would be great if this or a similar fix would make it into the next
> official release.

Here's a new version with tuplestore_select_read_pointer() added in
another place where it was lacking, and commit message.  Moving to
-hackers, where patches go.

Here's a shorter repro.  On master it prints:

NOTICE:  count = 1
NOTICE:  count union = 1

With the patch the second number is 2, as it should be.

CREATE TABLE test (i int);
INSERT INTO test VALUES (1);

CREATE OR REPLACE FUNCTION my_trigger_fun() RETURNS trigger
LANGUAGE plpgsql AS
$$
  BEGIN
 RAISE NOTICE 'count = %', (SELECT COUNT(*) FROM new_test);
 RAISE NOTICE 'count union = %', (SELECT COUNT(*)
  FROM (SELECT * FROM new_test UNION ALL SELECT * FROM new_test) ss);
 RETURN NULL;
 END;
$$;

CREATE TRIGGER my_trigger AFTER UPDATE ON test
REFERENCING NEW TABLE AS new_test OLD TABLE as old_test
FOR EACH STATEMENT EXECUTE PROCEDURE my_trigger_fun();

UPDATE test SET i = i;

-- 
Thomas Munro
http://www.enterprisedb.com


0001-Fix-tuplestore-read-pointer-confusion-in-nodeNamedtu.patch
Description: Binary data


Re: system catalog permissions

2018-02-26 Thread Melvin Davidson
On Mon, Feb 26, 2018 at 7:50 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Feb 26, 2018 at 4:55 PM, Paul Jungwirth <
> p...@illuminatedcomputing.com> wrote:
>
>> On 02/26/2018 03:47 PM, Tom Lane wrote:
>>
>>> PropAAS DBA  writes:
>>>
 We have a client which is segmenting their multi-tenant cluster
 (PostgreSQL 9.6) by schema, however if one of their clients connects via
 pgadmin they see ALL schemas, even the ones they don't have access to
 read.

>>> PG generally doesn't assume that anything in the system catalogs is
>>> sensitive.  If you don't want user A looking at user B's catalog
>>> entries, give them separate databases, not just separate schemas.
>>>
>>
>> I'm sure this is what you meant, but you need to give them separate
>> *clusters*, right? Even with separate databases you can still get a list of
>> the other databases and other roles in the cluster. I would actually love
>> to be mistaken but when I looked at it a year or two ago I couldn't find a
>> way to lock that down (without breaking a lot of tools anyway).
>>
>
> Yes, both the database and role namespaces are global to an individual
> cluster.  It's another level of trade-off; database and role names could
> realistically and easily be done UUID-style, so knowing the labels doesn't
> really tell anything except a vague impression of host size.
>
> Assuming clients don't want to see their log files...
>
> David J.
>
>
>... both the database and role namespace is global to an individual
cluster

Slight correction to that. Per
https://www.postgresql.org/docs/10/static/runtime-config-connection.html,
db_user_namespace is off by default. However, if set to on, then "you should
create users as username@dbname", which makes role names specific to each
database.
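
Roughly like this (database and user names are placeholders), per the documentation page above:

```sql
-- In postgresql.conf:
--   db_user_namespace = on
-- Then per-database roles carry the database name:
CREATE USER "alice@tenant_a" WITH PASSWORD 'secret';
-- A client connecting to tenant_a specifies just "alice";
-- the server appends @tenant_a before looking up the role.
```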

-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


Parallel Aware

2018-02-26 Thread Marwan Almaymoni
Hi,

I'm trying to run my query in parallel mode. I have set up my tables with the
"WITH (parallel_workers=2)" storage parameter and I've created indexes on the
needed attributes.
I have also set:
max_worker_processes = 100
max_parallel_workers_per_gather = 2
max_parallel_workers = 100

However, when I use EXPLAIN to check the query plan, all of the nodes show
"Parallel Aware" = false. Am I missing something?

Here are my tables and indexes:
CREATE TABLE t1(id int PRIMARY KEY, name varchar(200))
WITH(parallel_workers=2);
CREATE TABLE t2(id int PRIMARY KEY, fid int, value varchar(200))
WITH(parallel_workers=2);
CREATE INDEX ind_t1_id ON t1 USING HASH (id);
CREATE INDEX ind_t2_fid ON t2 USING HASH (fid);

Here's the query I tested with EXPLAIN:
SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.fid WHERE t1.id > 100;
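
For what it's worth, the `parallel_workers` storage parameter does not force a parallel plan by itself; small tables fall under the planner's size threshold and the startup cost of workers makes serial plans win. A sketch of settings commonly used to coax a parallel plan while testing (values are for experimentation, not production):

```sql
-- PostgreSQL 10 names; on 9.6 the size knob is min_parallel_relation_size.
SET max_parallel_workers_per_gather = 2;
SET parallel_setup_cost = 0;           -- default 1000
SET parallel_tuple_cost = 0;           -- default 0.1
SET min_parallel_table_scan_size = 0;  -- consider even tiny tables

EXPLAIN SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.fid WHERE t1.id > 100;
-- Look for a Gather node with "Workers Planned: 2"; very small tables
-- may still get a serial plan.
```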


Re: is libpq and openssl 1.1.* compatible?

2018-02-26 Thread Konstantin Izmailov
Thank you everyone who posted answers! I went back to openssl-1.0.2.

On Mon, Feb 26, 2018 at 1:15 AM, Michael Paquier 
wrote:

> On Mon, Feb 26, 2018 at 12:30:38AM -0700, Konstantin Izmailov wrote:
> > Let me ask this differently: can Visual Studio 2013/2017 compile libpq
> with
> > openssl 1.1 support? Under Windows?
>
> The answer to this question should be yes.  (Please note that no
> Windows buildfarm machines use openssl 1.1.0 as far as I can see, I
> myself build stuff with 1.0.2 on Windows.)
>
> > I have not been able to find an answer by googling before asking this
> > question here. Can someone share the compiled library and changes in the
> > source if needed for the openssl 1.1? Magnus please.
>
> No changes should be needed as far as I know in the scripts in
> src/tools/msvc.  The set of APIs present in 1.1.0 is the same whatever
> the platform so the compatibility is the same, and the dependent
> libraries should be ssleay32.lib and libeay32.lib whose location depend
> on your installation of OpenSSL.
> --
> Michael
>


Why is tuple_percent so low?

2018-02-26 Thread Sam Saffron
I am trying to refactor a table on disk so it consumes less space:

Original is:

create table post_timings(
   topic_id int not null,
   post_number int not null,
   user_id int not null,
   msecs int not null
)


Target is:

create table post_timings(
  post_id int not null,
  user_id int not null,
  dsecs smallint not null
)


Before I have:

select * from pgstattuple('post_timings2');

 table_len  | tuple_count | tuple_len  | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
------------+-------------+------------+---------------+------------------+----------------+--------------------+------------+--------------
 5146427392 |   116221695 | 4648867800 |         90.33 |                0 |              0 |                  0 |   15082484 |         0.29


After I have:

 table_len  | tuple_count | tuple_len  | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
------------+-------------+------------+---------------+------------------+----------------+--------------------+------------+--------------
 5142036480 |   116122544 | 3948166496 |         76.78 |                0 |              0 |                  0 |   15069224 |         0.29


What I find striking is that the table size on disk remains almost
unchanged despite each tuple taking 6 fewer bytes.

All the "missing space" is overhead that pgstattuple does not report;
in particular, tuple_percent drops from 90.33 to 76.78.

I was wondering:

1. Where is all my missing space? Is it page-alignment and per-page
overhead?

2. Are there any other schemes I can look at for storing this data in a
more efficient yet easily queryable / updatable table?

Keep in mind these tables get huge and in many of our cases will span
10-20GB just to store this information.
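
On question 1: most likely yes. A back-of-the-envelope sketch, assuming 8-byte MAXALIGN (x86-64): each heap tuple carries a 23-byte header padded so data starts at offset 24, the whole tuple is MAXALIGN'd on disk, and each row also costs a 4-byte line pointer in the page. The narrower row pads back up to the same per-row footprint:

```python
# Rough heap-tuple math; 8-byte MAXALIGN assumed.

def maxalign(n, a=8):
    """Round n up to the next multiple of a."""
    return -(-n // a) * a

header = 24            # 23-byte tuple header, data starts MAXALIGN'd
old_data = 16          # topic_id + post_number + user_id + msecs: 4 x int4
new_data = 10          # post_id + user_id + dsecs: int4 + int4 + int2

old_on_disk = maxalign(header + old_data) + 4   # + 4-byte line pointer
new_on_disk = maxalign(header + new_data) + 4

print(old_on_disk, new_on_disk)   # both 44: identical footprint per row
```

That matches the pgstattuple numbers above (tuple_len/tuple_count is 40 before and 34 after, yet table_len barely moves): 34 bytes rounds back up to 40 on disk, so only tuple_percent changes.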

Sam



Re: Creating complex track changes database - challenge!

2018-02-26 Thread Łukasz Jarych
Thank you geoff.

I need a solution like this:

The administrator pushes a button (or similar) and adds a comment (for
Bitbucket) after making an update to the database.
The whole database is then exported to a *.sql file and committed with the
text provided by the admin.

Can I connect to the database from a bash script and pick up a variable
entered by the admin?
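
A rough sketch of such a script: the admin's comment arrives as a positional parameter, pg_dump writes the file, and git commits only when something changed. The database name, repo path, and remote are assumptions; here the pg_dump call is shown as a comment and replaced by a placeholder file so the sketch stands alone.

```shell
#!/bin/sh
# Sketch only -- "mydb", the repo path and the remote are assumptions.
msg=${1:-"Added new table to database"}   # comment supplied by the admin

repo=$(mktemp -d)                         # stand-in for a local Bitbucket clone
( cd "$repo" && git init -q . &&
  git config user.email admin@example.com &&
  git config user.name  admin )

# Real use: pg_dump --no-owner mydb > "$repo/mydb.sql"
echo "-- schema dump placeholder" > "$repo/mydb.sql"

cd "$repo" || exit 1
if [ -n "$(git status --porcelain)" ]; then   # commit only when changed
    git add mydb.sql
    git commit -q -m "$msg"
    # git push                                # then push to Bitbucket
fi
git log -1 --format=%s
```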

Best,
Jacek

2018-02-26 21:45 GMT+01:00 geoff hoffman :

> I would personally do that separately: write a bash script & cron job that
> does a schema dump every hour, and (if there are any changes) commits any
> changes to your schema repository; then you can use Github or Bitbucket web
> hooks to do stuff with the changeset when it’s pushed.
>
> https://stackoverflow.com/questions/3878624/how-do-i-
> programmatically-determine-if-there-are-uncommitted-changes
> https://stackoverflow.com/questions/24772591/check-if-git-has-changes-
> programmatically
>
>
>
> On Feb 26, 2018, at 9:36 AM, Łukasz Jarych  wrote:
>
> Thank you geoff!
>
> I think I will test this one: http://www.liquibase.org/
>
> What about setting up a trigger on the metadata (system catalogs) to detect,
> for example, when a column is added?
>
> Best,
> Jacek
>
> 2018-02-26 16:43 GMT+01:00 geoff hoffman :
>
>>
>>
>> There’s https://flywaydb.org/
>> and http://www.liquibase.org/
>>
>> More: https://dbmstools.com/version-control-tools
>>
>> Also, if you know PHP, Laravel database migrations have worked great for
>> us!
>> https://laravel.com/docs/5.6/migrations
>>
>>
>>
>> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych  wrote:
>>
>> I would like to ask for your help with tracking changes to my database.
>> I am new to PostgreSQL, but I have to learn it quickly because of my
>> boss.
>>
>> I have to:
>>
>> 1. Keep all changes within table including:
>> -adding rows
>> -deleting
>> -editing
>>
>> 2. Save table with specific state and recover specific state (so go back
>> to previous table versions) including comparing tables.
>>
>> 3. Track all DDL and DML changes with the possibility to go back to a
>> previous version.
>>
>>
>>
>
>


Re: Creating complex track changes database - challenge!

2018-02-26 Thread Thiemo Kellner

Hi Lukasz

I am working on a generic (reading the information schema and other  
database metadata), trigger-based solution for SCD tables, i.e.  
tables that keep (or do not keep, according to SCD type) a history of  
the data. However, it is not very far along and I do not have much time  
to advance it, so it evolves very slowly. If you are interested, I would  
open a SourceForge project or the like and we could work on it together.


I am very much surprised that no database I know of supports  
SCD/historising tables out of the box. In 16 years as an ETL pro I have  
seen the wheel reinvented all the time... maybe PostgreSQL wants to  
get a head start on this.


Kind regards

Thiemo



Quoting Łukasz Jarych:


Hi Guys,

I have idea already for creating this complex solution.

Please give your notes and tips if you have.

1. Keep all changes within table including:
-adding rows
-deleting
-editing

This can be managed by adding triggers and one additional table where you
can summarize what was changed.
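
A minimal sketch of that item (table and column names are made up): one audit table plus an AFTER row trigger attached to each tracked table, recording the old and new row images as jsonb.

```sql
CREATE TABLE audit_log (
    changed_at  timestamptz NOT NULL DEFAULT now(),
    table_name  text        NOT NULL,
    operation   text        NOT NULL,     -- INSERT / UPDATE / DELETE
    old_row     jsonb,
    new_row     jsonb
);

CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (table_name, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE  -- INSERT
        INSERT INTO audit_log (table_name, operation, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;   -- return value ignored for AFTER triggers
END;
$$;

-- Attach to each table you want to track ("some_table" is a placeholder):
CREATE TRIGGER some_table_audit
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE audit_row_change();
```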

2. Changing DDL of tables:

I think that creating a trigger on the metadata should solve the problem.
How can I do it? I do not know yet... :)

3. Table versioning.

If it is possible to save a table (a backup or something) to disk, I can
check the latest date of change and save the table under this date and name,
and create a table with all table changes and versions.
What do you think?

4. There is still the problem of versioning the whole database.
I found a very interesting link, but I do not understand how it works:

https://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx

Best,
Jacek

2018-02-26 12:16 GMT+01:00 Łukasz Jarych :


Hi Manuel,
thank you very much!

Regarding your tool: if it is not supported and is for a specific case,
I will not use it but will figure out something new. I do not even know
how to install these .hs files...

I thought about creating triggers to have all changes to specific tables.
And for each table name (or number) keep changes in one separate table.
What do you think about it?

If all you need is saving and restoring specific table states, logical

dumps with pg_dump should probably be enough for your needs.



Can you explain in detail how I can use it?
What if a user adds a new column? Can I save my table, for example, as
version 3 and come back to version 1 in the future (without this new column)?

Best,
Jacek



2018-02-26 12:04 GMT+01:00 Manuel Gómez :


On Mon, Feb 26, 2018 at 11:44 AM Łukasz Jarych wrote:


I have to:

1. Keep all changes within table including:
-adding rows
-deleting
-editing

2. Save table with specific state and recover specific state (so go back
to previous table versions) including comparing tables.

3. Track all DDL and DML changes with the possibility to go back to a previous
version.



Hi,

I had similar needs long ago, so I wrote this tool I called Squealer,
which would transform a specification of a SQL database schema into some
PostgreSQL DDL to create a database that implements the same schema in a
logical sense, but actually stores all historical rows, and even permits
existing rows to have references to (soft-)deleted rows, all while
providing modifiable views that simulate the behavior of a regular table as
specified in the input schema through generous use of INSTEAD OF triggers.
It works somewhat like having version control for your database.

You may find the source code here: https://github.com/mgomezch/squealer
Unfortunately, it has hardly any comments, it is completely unmaintained
and probably unused anywhere, I have no idea whether it builds with today's
libraries, and it does not necessarily break the tradeoffs in this space in
a way that fits your use case.

Note there are major caveats with keeping all historical data around
forever, and the semantics of querying historical data can get complicated,
let alone having current data refer to deleted, historical data.  I built
this for a very specific use case where this was the right design, but
please consider very carefully whether this is what you want.

Storing your database history forever would take a lot of space.
Consider whether you can instead keep a record of changes stored outside
the database in some cheap cold storage.  Also consider just keeping a set
of tables with dynamically structured event records (e.g. JSON fields)
partitioned by time ranges and retained only temporarily, perhaps even in a
separate database.  Any such solution will have significant cost and
performance impact if your database bears a nontrivial load, so be careful.

You could also just place your database on a PostgreSQL cluster by itself
and then keep all WAL segments archived forever, so you could just do
point-in-time recovery to any point in the history of your database.  The
space required would grow very quickly, though, so if you don't really need
the full history forever, but only a fixed retention period, you can surely
use any of the well-known solutions for PostgreSQL bac

Re: Creating complex track changes database - challenge!

2018-02-26 Thread Łukasz Jarych
Hi Thiemo,

You can share the repository; maybe when I get deeper into PostgreSQL I
can help you.

Regarding table versioning.
I am thinking about a simple solution:
1. Create a query or trigger that checks the last date of data input
within the table.
2. Export the table into a separate file/backup/structure named
Date_TableName.
3. Have a table where I collect all the history together in one place.

Regarding database versioning.
Write a script that exports the whole database into a *.sql file whenever
the administrator wants, adding a variable with a description, for example
"Added new table to database".
Then commit this file to Bitbucket automatically using a bash script and
compare results via Bitbucket.

What do you think?

Best,
Jacek



2018-02-27 8:11 GMT+01:00 Thiemo Kellner :

> Hi Lukasz
>
> I am working on a generic (reading the information schema and other
> database metadata), trigger based solution for SCD tables, i. e. tables
> that keep (or not according to SCD type) history of the data. However, it
> is not far grown and I am not having much time to advance it so it evolves
> very slowly. If you are interested, I would open a sourceforge project or
> the like and we can work on it together.
>
> I am very much surprised that no database I know of supports
> SCD/historising tables out of the box. In 16 years as ETL pro I have seen
> reinvented the wheel all the time... maybe PostgreSQL wants to get a head
> start on this.
>
> Kind regards
>
> Thiemo
>
>
>
Quoting Łukasz Jarych:
>
> Hi Guys,
>>
>> I have idea already for creating this complex solution.
>>
>> Please give your notes and tips if you have.
>>
>> 1. Keep all changes within table including:
>> -adding rows
>> -deleting
>> -editing
>>
>> This can be managed by adding triggers and one additional table where you
>> can have sum up what was changed.
>>
>> 2. Changing DDL of tables:
>>
>> I think that creating trigger for metadata should solve the problem. How
>> can i do it? I do not know already ...:)
>>
>> 3. Changing tables versioning.
>>
>> It it is possible to save table (back up or something) to disc - i can
>> check the latest date of change and save table with this date and name.
>> And create table with all tables changes and version.
>> What do you think ?
>>
>> 4. Still problem with creating whole database versioning.
>> I found very interesting link but i not understand how it is works:
>>
>> https://odetocode.com/blogs/scott/archive/2008/02/02/version
>> ing-databases-change-scripts.aspx
>>
>> Best,
>> Jacek
>>
>> 2018-02-26 12:16 GMT+01:00 Łukasz Jarych :
>>
>> Hi Manuel,
>>> thank you very much!
>>>
>>> Regarding your  tool - if it is not supported and it is for specific case
>>> - i will not use it but figure out something new. I do not even how to
>>> install this .hs files...
>>>
>>> I thought about creating triggers to have all changes to specific tables.
>>> And for each table name (or number) keep changes in one separate table.
>>> What do you think about it?
>>>
>>> If all you need is saving and restoring specific table states, logical
>>>
 dumps with pg_dump should probably be enough for your needs.

>>>
>>>
>>> Can you explain in details how can i use it?
>>> What if user add new column? I can save ma table for example as version 3
>>> and come back to version 1 in the future? (without this new column?)
>>>
>>> Best,
>>> Jacek
>>>
>>>
>>>
>>> 2018-02-26 12:04 GMT+01:00 Manuel Gómez :
>>>
>>> On Mon, Feb 26, 2018 at 11:44 AM Łukasz Jarych wrote:

 I have to:
>
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
>
> 2. Save table with specific state and recover specific state (so go
> back
> to previous table versions) including comparing tables.
>
> 3. Track all DDL and DML changes with the possibility to go back to a
> previous version.
>
>
 Hi,

 I had similar needs long ago, so I wrote this tool I called Squealer,
 which would transform a specification of a SQL database schema into some
 PostgreSQL DDL to create a database that implements the same schema in a
 logical sense, but actually stores all historical rows, and even permits
 existing rows to have references to (soft-)deleted rows, all while
 providing modifiable views that simulate the behavior of a regular
 table as
 specified in the input schema through generous use of INSTEAD OF
 triggers.
 It works somewhat like having version control for your database.

 You may find the source code here: https://github.com/mgomezch/squealer
 Unfortunately, it has hardly any comments, it is completely unmaintained
 and probably unused anywhere, I have no idea whether it builds with
 today's
 libraries, and it does not necessarily break the tradeoffs in this
 space in
 a way that fits your use case.

 Note there are major caveats with keeping all histor

Re: Creating complex track changes database - challenge!

2018-02-26 Thread Thiemo Kellner
I attached what I have got so far. I will set up a shared repository  
one of these days.


Quoting Thiemo Kellner:


Hi Lukasz

I am working on a generic (reading the information schema and other  
database metadata), trigger based solution for SCD tables, i. e.  
tables that keep (or not according to SCD type) history of the data.  
However, it is not far grown and I am not having much time to  
advance it so it evolves very slowly. If you are interested, I would  
open a sourceforge project or the like and we can work on it together.


I am very much surprised that no database I know of supports  
SCD/historising tables out of the box. In 16 years as ETL pro I have  
seen reinvented the wheel all the time... maybe PostgreSQL wants to  
get a head start on this.


Kind regards

Thiemo


--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
Public PGP key:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.


full_install.7z
Description: Binary data


Re: is libpq and openssl 1.1.* compatible?

2018-02-26 Thread Michael Paquier
On Mon, Feb 26, 2018 at 12:30:38AM -0700, Konstantin Izmailov wrote:
> Let me ask this differently: can Visual Studio 2013/2017 compile libpq with
> openssl 1.1 support? Under Windows?

The answer to this question should be yes.  (Please note that no
Windows buildfarm machines use openssl 1.1.0 as far as I can see, I
myself build stuff with 1.0.2 on Windows.)

> I have not been able to find an answer by googling before asking this
> question here. Can someone share the compiled library and changes in the
> source if needed for the openssl 1.1? Magnus please.

No changes should be needed as far as I know in the scripts in
src/tools/msvc.  The set of APIs present in 1.1.0 is the same whatever
the platform so the compatibility is the same, and the dependent
libraries should be ssleay32.lib and libeay32.lib whose location depend
on your installation of OpenSSL.
--
Michael


signature.asc
Description: PGP signature


Re: Unexpected behavior with transition tables in update statement trigger

2018-02-26 Thread Thomas Munro
On Sat, Feb 24, 2018 at 4:47 PM, Tom Kazimiers  wrote:
> I am on Postgres 10.2 and try to get a statement level trigger to work that
> is executed after UPDATE statements on a particular table. This trigger
> references both the old and new transition table and for some reason I am
> unable to reference each transition table multiple times in a CTE or
> subquery. E.g. forming a UNION ALL with all rows of the new transition table
> with itself, does only use the new table row once. I don't understand why
> and would appreciate some insight.

Thanks for the reproducer.  Yeah, that seems to be a bug.
nodeNamedTuplestorescan.c allocates a new read pointer for each
separate scan of the named tuplestore, but it doesn't call
tuplestore_select_read_pointer() so that the two scans that appear in
your UNION ALL plan are sharing the same read pointer.  At first
glance the attached seems to fix the problem, but I'll need to look
more carefully tomorrow.

-- 
Thomas Munro
http://www.enterprisedb.com


named-tuplestore-scan-select.patch
Description: Binary data


Creating complex track changes database - challenge!

2018-02-26 Thread Łukasz Jarych
Hi Guys,

I would like to ask you for help with tracking changes to my database.
I am new to PostgreSQL but I have to learn it quickly because of my boss.

I have to:

1. Keep all changes within table including:
-adding rows
-deleting
-editing

2. Save table with specific state and recover specific state (so go back to
previous table versions) including comparing tables.

3. Track all DDL and DML changes with the possibility to go back to a previous
version.

Any tips will be welcome,
Best,
Jacek


Re: Creating complex track changes database - challenge!

2018-02-26 Thread Manuel Gómez
On Mon, Feb 26, 2018 at 11:44 AM Łukasz Jarych  wrote:

> I have to:
>
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
>
> 2. Save table with specific state and recover specific state (so go back
> to previous table versions) including comparing tables.
>
> 3. Track all DDL and DML changes with the possibility to go back to a previous
> version.
>

Hi,

I had similar needs long ago, so I wrote this tool I called Squealer, which
would transform a specification of a SQL database schema into some
PostgreSQL DDL to create a database that implements the same schema in a
logical sense, but actually stores all historical rows, and even permits
existing rows to have references to (soft-)deleted rows, all while
providing modifiable views that simulate the behavior of a regular table as
specified in the input schema through generous use of INSTEAD OF triggers.
It works somewhat like having version control for your database.
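As a rough illustration of the INSTEAD OF approach (the names below are invented for the example, not Squealer's actual output), a soft-deleting view can look like:

```sql
-- Backing table keeps every row ever created; "deleted" marks soft-deletes.
CREATE TABLE person_storage (
  id      serial PRIMARY KEY,
  name    text NOT NULL,
  deleted boolean NOT NULL DEFAULT false
);

-- The view behaves like the logical "person" table from the schema.
CREATE VIEW person AS
  SELECT id, name FROM person_storage WHERE NOT deleted;

CREATE FUNCTION person_delete() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  -- A DELETE on the view only hides the row: history is preserved, and
  -- foreign keys pointing at person_storage.id keep working.
  UPDATE person_storage SET deleted = true WHERE id = OLD.id;
  RETURN OLD;
END $$;

CREATE TRIGGER person_instead_of_delete
  INSTEAD OF DELETE ON person
  FOR EACH ROW EXECUTE PROCEDURE person_delete();
```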

You may find the source code here: https://github.com/mgomezch/squealer
Unfortunately, it has hardly any comments, it is completely unmaintained
and probably unused anywhere, I have no idea whether it builds with today's
libraries, and it does not necessarily break the tradeoffs in this space in
a way that fits your use case.

Note there are major caveats with keeping all historical data around
forever, and the semantics of querying historical data can get complicated,
let alone having current data refer to deleted, historical data.  I built
this for a very specific use case where this was the right design, but
please consider very carefully whether this is what you want.

Storing your database history forever would take a lot of space.  Consider
whether you can instead keep a record of changes stored outside the
database in some cheap cold storage.  Also consider just keeping a set of
tables with dynamically structured event records (e.g. JSON fields)
partitioned by time ranges and retained only temporarily, perhaps even in a
separate database.  Any such solution will have significant cost and
performance impact if your database bears a nontrivial load, so be careful.

You could also just place your database on a PostgreSQL cluster by itself
and then keep all WAL segments archived forever, so you could just do
point-in-time recovery to any point in the history of your database.  The
space required would grow very quickly, though, so if you don't really need
the full history forever, but only a fixed retention period, you can surely
use any of the well-known solutions for PostgreSQL backups that allow for
this through WAL archiving and periodic basebackups: e.g. WAL-E, WAL-G,
pgBackRest…
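A minimal continuous-archiving setup looks roughly like this (the archive path is only an example):

```
# postgresql.conf
wal_level = replica        # "archive" or higher on pre-9.6 servers
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
```

Combined with a periodic pg_basebackup, this lets you recover the cluster to any point in time after the base backup by replaying archived WAL.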

If all you need is saving and restoring specific table states, logical
dumps with pg_dump should probably be enough for your needs.
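For example, a single table's state can be captured and later restored with pg_dump (the database and table names here are placeholders):

```shell
# Snapshot the current definition and contents of one table as "version 3".
pg_dump -d mydb -t public.orders --clean -f orders_v3.sql

# Later, restore exactly that state; --clean makes the script drop the
# table first, so columns added after the snapshot disappear as well.
psql -d mydb -f orders_v3.sql
```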


Re: Creating complex track changes database - challenge!

2018-02-26 Thread Thomas Kellerer
Łukasz Jarych schrieb am 26.02.2018 um 11:44:
> I would like to ask you for help with tracking changes to my database.
> I am new to PostgreSQL but I have to learn it quickly because of my boss.
> 
> I have to:
> 
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
>
> 2. Save table with specific state and recover specific state (so go back to 
> previous table versions) including comparing tables.

There are several generic auditing triggers that can do that:

* http://cjauvin.blogspot.de/2013/05/impossibly-lean-audit-system-for.html
* https://eager.io/blog/audit-postgres/
* http://okbob.blogspot.de/2015/01/most-simply-implementation-of-history.html
* 
http://8kb.co.uk/blog/2015/01/19/copying-pavel-stehules-simple-history-table-but-with-the-jsonb-type/
* https://www.garysieling.com/blog/auditing-data-changes-postgres
* https://github.com/wingspan/wingspan-auditing
* https://wiki.postgresql.org/wiki/Audit_trigger_91plus

> 3. Track all DDL and DML changes with the possibility to go back to a previous
> version.

That will be very tricky, especially the "go back to previous version" part. 

But in general DDL changes can be tracked using event triggers. 
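A minimal sketch of such an event trigger (the table and function names are invented):

```sql
CREATE TABLE ddl_log (
  id         serial PRIMARY KEY,
  logged_at  timestamptz NOT NULL DEFAULT now(),
  username   text NOT NULL DEFAULT current_user,
  tag        text,
  object     text
);

CREATE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
  r record;
BEGIN
  -- pg_event_trigger_ddl_commands() reports each object touched by the
  -- DDL statement that fired the trigger.
  FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
    INSERT INTO ddl_log (tag, object)
    VALUES (r.command_tag, r.object_identity);
  END LOOP;
END $$;

CREATE EVENT TRIGGER track_ddl
  ON ddl_command_end
  EXECUTE PROCEDURE log_ddl();
```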

Thomas



Re: Creating complex track changes database - challenge!

2018-02-26 Thread Łukasz Jarych
Hi Guys,

I already have an idea for creating this complex solution.

Please share your notes and tips if you have any.

1. Keep all changes within table including:
-adding rows
-deleting
-editing

This can be managed by adding triggers and one additional table summarizing
what was changed.

2. Changing DDL of tables:

I think that creating a trigger on the metadata should solve the problem. How
can I do it? I do not know yet... :)

3. Table versioning.

If it is possible to save a table (a backup or something) to disk, I can
check the latest date of change and save the table with that date and name,
and create a table listing all table changes and versions.
What do you think?

4. There is still the problem of versioning the whole database.
I found a very interesting link but I do not yet understand how it works:

https://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx

Best,
Jacek

2018-02-26 12:16 GMT+01:00 Łukasz Jarych :

> Hi Manuel,
> thank you very much!
>
> Regarding your tool - if it is not supported and it is for a specific case,
> I will not use it but figure out something new. I do not even know how to
> install these .hs files...
>
> I thought about creating triggers to record all changes to specific tables,
> and for each table name (or number) keeping the changes in one separate table.
> What do you think about it?
>
> If all you need is saving and restoring specific table states, logical
>> dumps with pg_dump should probably be enough for your needs.
>
>
> Can you explain in detail how I can use it?
> What if a user adds a new column? Can I save my table, for example, as version 3
> and come back to version 1 in the future (without this new column)?
>
> Best,
> Jacek
>
>
>
> 2018-02-26 12:04 GMT+01:00 Manuel Gómez :
>
>> On Mon, Feb 26, 2018 at 11:44 AM Łukasz Jarych 
>> wrote:
>>
>>> I have to:
>>>
>>> 1. Keep all changes within table including:
>>> -adding rows
>>> -deleting
>>> -editing
>>>
>>> 2. Save table with specific state and recover specific state (so go back
>>> to previous table versions) including comparing tables.
>>>
>>> 3. Track all DDL and DML changes with the possibility to go back to a previous
>>> version.
>>>
>>
>> Hi,
>>
>> I had similar needs long ago, so I wrote this tool I called Squealer,
>> which would transform a specification of a SQL database schema into some
>> PostgreSQL DDL to create a database that implements the same schema in a
>> logical sense, but actually stores all historical rows, and even permits
>> existing rows to have references to (soft-)deleted rows, all while
>> providing modifiable views that simulate the behavior of a regular table as
>> specified in the input schema through generous use of INSTEAD OF triggers.
>> It works somewhat like having version control for your database.
>>
>> You may find the source code here: https://github.com/mgomezch/squealer
>> Unfortunately, it has hardly any comments, it is completely unmaintained
>> and probably unused anywhere, I have no idea whether it builds with today's
>> libraries, and it does not necessarily break the tradeoffs in this space in
>> a way that fits your use case.
>>
>> Note there are major caveats with keeping all historical data around
>> forever, and the semantics of querying historical data can get complicated,
>> let alone having current data refer to deleted, historical data.  I built
>> this for a very specific use case where this was the right design, but
>> please consider very carefully whether this is what you want.
>>
>> Storing your database history forever would take a lot of space.
>> Consider whether you can instead keep a record of changes stored outside
>> the database in some cheap cold storage.  Also consider just keeping a set
>> of tables with dynamically structured event records (e.g. JSON fields)
>> partitioned by time ranges and retained only temporarily, perhaps even in a
>> separate database.  Any such solution will have significant cost and
>> performance impact if your database bears a nontrivial load, so be careful.
>>
>> You could also just place your database on a PostgreSQL cluster by itself
>> and then keep all WAL segments archived forever, so you could just do
>> point-in-time recovery to any point in the history of your database.  The
>> space required would grow very quickly, though, so if you don't really need
>> the full history forever, but only a fixed retention period, you can surely
>> use any of the well-known solutions for PostgreSQL backups that allow for
>> this through WAL archiving and periodic basebackups: e.g. WAL-E, WAL-G,
>> pgBackRest…
>>
>> If all you need is saving and restoring specific table states, logical
>> dumps with pg_dump should probably be enough for your needs.
>>
>
>


merge statement gives error

2018-02-26 Thread Abhra Kar
Hi,

I am trying to execute the following statement:


merge into ABC as n using dual on (n.id=123)
when matched update set aaa=222, bbb=333
when not matched insert (id, aaa) values (NEXTVAL(id), 555);


but it gives a syntax error. What should be the proper syntax? (Parameter
values are passed correctly based on data type.)

Thanks


Re: Unexpected behavior with transition tables in update statement trigger

2018-02-26 Thread Tom Kazimiers

Hi Thomas,

On Mon, Feb 26, 2018 at 11:15:44PM +1300, Thomas Munro wrote:

> On Sat, Feb 24, 2018 at 4:47 PM, Tom Kazimiers  wrote:
> Thanks for the reproducer.  Yeah, that seems to be a bug.
> nodeNamedTuplestorescan.c allocates a new read pointer for each
> separate scan of the named tuplestore, but it doesn't call
> tuplestore_select_read_pointer() so that the two scans that appear in
> your UNION ALL plan are sharing the same read pointer.  At first
> glance the attached seems to fix the problem, but I'll need to look
> more carefully tomorrow.


Thanks very much for investigating this. I can confirm that applying
your patch results in the tuples I expected, both in my test trigger and
in my actual trigger function.

It would be great if this or a similar fix made it into the next
official release.


Cheers,
Tom



Re: merge statement gives error

2018-02-26 Thread Thomas Kellerer
Abhra Kar schrieb am 26.02.2018 um 16:02:
> Hi,
> 
> I am trying to execute the following statement:
> 
> 
> merge into ABC as n using dual on (n.id=123)
> 
> when matched update set aaa=222, bbb=333
> 
>  when not matched insert (id, aaa) values (NEXTVAL(id),555);
> 
> 
> 
> but it gives a syntax error. What should be the proper syntax? (Parameter
> values are passed correctly based on data type.)

There is no MERGE statement in Postgres (there might be in Postgres 11, but not 
currently). 

You will need to use INSERT ON CONFLICT instead. 
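For the common case where the inserted row uses the same key that was matched on, and assuming id carries a unique constraint (the original schema isn't shown), a rough translation is:

```sql
INSERT INTO abc (id, aaa)
VALUES (123, 555)
ON CONFLICT (id)
DO UPDATE SET aaa = 222, bbb = 333;
```

Note this is not a one-to-one translation: the original MERGE inserted NEXTVAL(id) rather than 123 on a miss, which ON CONFLICT cannot express. In PostgreSQL that would be nextval('some_sequence'), and keeping those exact semantics would need a conditional UPDATE-then-INSERT instead.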





Re: Creating complex track changes database - challenge!

2018-02-26 Thread geoff hoffman


There’s https://flywaydb.org/ 
and http://www.liquibase.org/ 

More: https://dbmstools.com/version-control-tools 
 

Also, if you know PHP, Laravel database migrations have worked great for us!
https://laravel.com/docs/5.6/migrations 




> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych  wrote:
> 
> I would like to ask you for help with tracking changes to my database.
> I am new to PostgreSQL but I have to learn it quickly because of my boss.
> 
> I have to:
> 
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
> 
> 2. Save table with specific state and recover specific state (so go back to 
> previous table versions) including comparing tables.
> 
> 3. Track all DDL and DML changes with the possibility to go back to a previous
> version.
> 



Re: Creating complex track changes database - challenge!

2018-02-26 Thread Alan Gano
I've got a manual method (though it's probably wise to go with a vendor
product) that I will just dump here.

It tracks all configured tables into a single table containing before/after
record images in jsonb.



create table aud_audit
(
   id           serial8,
   timestamp    timestamptz default now() NOT NULL,
   app_user_id  int8 NOT NULL,
   operation    varchar(8) NOT NULL,
   table_name   varchar(100) NOT NULL,
   before_image jsonb,
   after_image  jsonb,

   constraint aud_audit_pk primary key(id)
)
;

create or replace function audit_all() returns trigger as
$$
declare
   t_before   jsonb := NULL;
   t_after    jsonb := NULL;
   t_user_id  int8;
begin
   -- The application identifies its user via the custom setting
   -- "app.user_id"; fall back to -1 when it is not set.
   begin
      t_user_id := current_setting('app.user_id')::int8;
   exception
      when OTHERS then
         t_user_id := -1;
   end;

   case tg_op
      when 'INSERT' then
         t_after := row_to_json(new.*);
      when 'UPDATE' then
         t_before := row_to_json(old.*);
         t_after := row_to_json(new.*);
      when 'DELETE' then
         t_before := row_to_json(old.*);
      when 'TRUNCATE' then
         t_before := row_to_json(old.*);
   end case;

   insert into aud_audit
   (
      app_user_id,
      operation,
      table_name,
      before_image,
      after_image
   )
   values(
      t_user_id,
      tg_op,
      tg_table_name,
      t_before,
      t_after
   );

   return
      case tg_op
         when 'INSERT' then new
         when 'UPDATE' then new
         when 'DELETE' then old
         when 'TRUNCATE' then old
      end;
end;
$$
language plpgsql
;


*for each table you want to track, substituting its name for <table_name> ...*

create trigger <table_name>_audit_t01
   before insert or update or delete
   on <table_name>
   for each row execute procedure audit_all()
;
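A quick usage sketch, assuming a hypothetical customers table with the trigger attached:

```sql
-- Tell the trigger who is acting; it falls back to -1 when unset.
SET app.user_id = '42';

UPDATE customers SET name = 'New Name' WHERE id = 1;

-- Inspect the most recent audit entry, pulling one field out of the
-- before/after jsonb images.
SELECT operation,
       table_name,
       before_image ->> 'name' AS old_name,
       after_image  ->> 'name' AS new_name
FROM aud_audit
ORDER BY id DESC
LIMIT 1;
```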



On Mon, Feb 26, 2018 at 7:43 AM, geoff hoffman  wrote:

>
>
> There’s https://flywaydb.org/
> and http://www.liquibase.org/
>
> More: https://dbmstools.com/version-control-tools
>
> Also, if you know PHP, Laravel database migrations have worked great for
> us!
> https://laravel.com/docs/5.6/migrations
>
>
>
> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych  wrote:
>
> I would like to ask you for help with tracking changes to my database.
> I am new to PostgreSQL but I have to learn it quickly because of my boss.
>
> I have to:
>
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
>
> 2. Save table with specific state and recover specific state (so go back
> to previous table versions) including comparing tables.
>
> 3. Track all DDL and DML changes with the possibility to go back to a previous
> version.
>
>
>