Re: FW: Setting up streaming replication problems

2018-01-26 Thread Thiemo Kellner

Do you mind me bumping the issue?

Zitat von "Thiemo Kellner, NHC Barhufpflege"  
:



Andreas, thanks for your reply.


I am trying to set up synchronous streaming replication as a try-out. I use my
laptop with Debian 9 and PostgreSQL package 10+189.pgdg90+1. In this
PostgreSQL installation I have two clusters, main (master) and
main2 (hot standby). I have tried following Riggs's book, the PostgreSQL
documentation and some pages on the web, but fail miserably.


you have one cluster with 2 nodes ;-)


Ähm, right. Been more in a hurry than I thought as it shows below as well.


Note: it's a bad idea to build a synchronous cluster with only 2 nodes,
you need at least 3 nodes


I am aware that synchronous clustering might deadlock the master. To  
have a hot standby on the same metal does not make much sense  
either. This is just a try out though. The proof of me being stupid,  
so to speak. ;-)



that's the recovery.conf, not pg_hba.conf. And you don't need it on the
master.


Copy paste blunder (see above ;-))


standby_mode = 'off'
primary_conninfo = 'host=localhost user=repuser port=5433 password='
restore_command = 'false'


why that?


Uhm, cannot remember *blush*, I removed it but it made no difference.


master or standby? confused...


C&p blunder again. Standby.


standby_mode = 'on'
primary_conninfo = 'host=localhost user=repuser port=5433 password='


the same port as above?


I keep repeating myself: C&p blunder again: nope


1) I am not sure whether to put the md5 value of the repuser password
into primary_conninfo or the plain one. I don't feel the documentation
or the book is clear on that.


Anyone two dimes on that?


2) Starting the clusters, I do not see any attempt of the hot standby
to connect to the master.


I put logging back to more sane info as debug did not give me any  
useful information on that (maybe due to my noobdom). But I cannot  
see any connection attempt...



are the 2 nodes running on different ports?


Yes, 5432 master, 5433 standby
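
(For reference, a minimal sketch of what the standby's recovery.conf and the
master's pg_hba.conf usually contain for such a setup; the password value is a
placeholder and the rest is inferred from the ports and user named above, so
treat it as an assumption rather than the actual config in use here:)

# recovery.conf on the standby (main2), pointing at the master on port 5432;
# primary_conninfo is a plain libpq connection string, so it takes the
# clear-text password, not its md5 hash
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=repuser password=secret'

# pg_hba.conf on the master must allow the replication connection, e.g.:
# host  replication  repuser  127.0.0.1/32  md5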


You need only 1 recovery.conf, on the standby.


But does it hurt? I just had the idea that role change would be  
easier. master <-> standby



restore_command = 'false'
is useless; I'm guessing that's the reason that the standby doesn't
connect to the master.


Uhm, is it useless or does it prevent connection? Anyway, I removed  
it and it did not make a difference.



Greetings from Dresden, Andreas


:-) Greetings from Bannholz@Hochrhein :-D

--
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF









--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF






Re: Deferrable constraint trigger

2018-01-26 Thread Laurenz Albe
Maciej Kołuda wrote:
> I am trying to write a constraint trigger to assure that at any given point of
> time I have exactly one record "active" based on the time.
> To give you some background, I would like to have exactly one box per owner
> active, into which items will be added.
> 
> And trigger itself:
> 
> CREATE OR REPLACE FUNCTION check_active_box() RETURNS trigger AS 
> $check_active_box$
> BEGIN
> IF (select count(*) from boxes where owner_id = NEW.owner_id and 
> validity_time >= now()) > 1 THEN
> RAISE EXCEPTION 'At least one valid box exists';
> END IF;
> RETURN NEW;
> END;
> $check_active_box$ LANGUAGE plpgsql;
> 
> CREATE CONSTRAINT TRIGGER check_active_box AFTER INSERT OR UPDATE ON boxes 
> DEFERRABLE INITIALLY DEFERRED FOR EACH ROW EXECUTE PROCEDURE 
> check_active_box();
> 
> Here is the part of the pg_log:
> 
> 2018-01-25 15:10:08 CET [15256-666] myuser@db LOG:  execute : SELECT 
> 1
> 2018-01-25 15:10:08 CET [15256-667] myuser@db LOG:  execute : BEGIN
> 2018-01-25 15:10:08 CET [15256-668] myuser@db LOG:  execute : select 
> nextval ('hibernate_sequence')
> 2018-01-25 15:10:09 CET [15255-2] myuser@db LOG:  execute : SELECT 1
> 2018-01-25 15:10:09 CET [15255-3] myuser@db LOG:  execute : BEGIN
> ...
> 2018-01-25 15:10:09 CET [15254-2] myuser@db LOG:  execute : SELECT 1
> 2018-01-25 15:10:09 CET [15254-3] myuser@db LOG:  execute : BEGIN
> 
> 2018-01-25 15:10:09 CET [15254-10] myuser@db LOG:  execute : insert 
> into boxes (inserted_at, owner_id, validity_time, version, id) values ($1, 
> $2, $3, $4, $5)
> 2018-01-25 15:10:09 CET [15254-11] myuser@db DETAIL:  parameters: $1 = 
> '2018-01-25 15:10:09.245', $2 = '10', $3 = '2018-01-26 15:10:09.244', $4 = 
> '0', $5 = '5082'
> 2018-01-25 15:10:09 CET [15256-676] myuser@db LOG:  execute : insert 
> into boxes (inserted_at, owner_id, validity_time, version, id) values ($1, 
> $2, $3, $4, $5)
> 2018-01-25 15:10:09 CET [15256-677] myuser@db DETAIL:  parameters: $1 = 
> '2018-01-25 15:10:09.244', $2 = '10', $3 = '2018-01-26 15:10:09.231', $4 = 
> '0', $5 = '5080'
> 2018-01-25 15:10:09 CET [15254-12] myuser@db LOG:  execute : insert 
> into box_messages (box_id, item_id) values ($1, $2)
> 2018-01-25 15:10:09 CET [15254-13] myuser@db DETAIL:  parameters: $1 = 
> '5082', $2 = '5072'
> 2018-01-25 15:10:09 CET [15255-10] myuser@db LOG:  execute : insert 
> into boxes (inserted_at, owner_id, validity_time, version, id) values ($1, 
> $2, $3, $4, $5)
> 2018-01-25 15:10:09 CET [15255-11] myuser@db DETAIL:  parameters: $1 = 
> '2018-01-25 15:10:09.246', $2 = '10', $3 = '2018-01-26 15:10:09.232', $4 = 
> '0', $5 = '5081'
> 2018-01-25 15:10:09 CET [15256-678] myuser@db LOG:  execute : insert 
> into box_items (box_id, item_id) values ($1, $2)
> 2018-01-25 15:10:09 CET [15256-679] myuser@db DETAIL:  parameters: $1 = 
> '5080', $2 = '5070'
> 2018-01-25 15:10:09 CET [15255-12] myuser@db LOG:  execute : insert 
> into box_items (box_id, item_id) values ($1, $2)
> 2018-01-25 15:10:09 CET [15255-13] myuser@db DETAIL:  parameters: $1 = 
> '5081', $2 = '5071'
> 2018-01-25 15:10:09 CET [15256-680] myuser@db LOG:  execute S_2: COMMIT
> 2018-01-25 15:10:09 CET [15254-14] myuser@db LOG:  execute S_1: COMMIT
> 2018-01-25 15:10:09 CET [15255-14] myuser@db LOG:  execute S_1: COMMIT
> 2018-01-25 15:10:09 CET [15255-15] myuser@db ERROR:  At least one valid box 
> exists
> 2018-01-25 15:10:09 CET [15255-16] myuser@db STATEMENT:  COMMIT
> 2018-01-25 15:10:09 CET [15255-17] myuser@db LOG:  execute : BEGIN
> 
> If I read the above log correctly, it looks like the last transaction, box id =
> 5081, has been rolled back properly, but the previous one
> (2018-01-25 15:10:09 CET [15254-14] myuser@db LOG:  execute S_1: COMMIT) has
> been committed without raising an error, which results in creating a second
> unwanted box.
> 
> Could you point what possibly might be wrong with proposed constraint trigger?

There is a race condition.

The trigger runs right before the transaction commits, but if both 15256 and
15254 commit at the same time, the trigger functions may run concurrently, and
then neither will see the results of the other transaction, which has not yet
committed.

The window for the race condition grows with the run time of your trigger
function, but it will never go away.

You could make your function faster if you use
   IF EXISTS (SELECT 1 FROM boxes WHERE ...) THEN

Using SERIALIZABLE transactions would be a simple way to make sure your
constraint is not violated.
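
(For illustration, a minimal sketch of the serializable approach; the column
names come from the log above, the literal values are made up, and retry
handling is assumed to live in the application:)

BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO boxes (inserted_at, owner_id, validity_time, version, id)
    VALUES (now(), 10, now() + interval '1 day', 0, 5083);
COMMIT;
-- With the constraint trigger still in place, SERIALIZABLE turns the race into
-- a serialization failure (SQLSTATE 40001) for one of the two concurrent
-- transactions instead of a second "active" box; the application retries it.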

Yours,
Laurenz Albe



Re: AFTER UPDATE trigger updating other records

2018-01-26 Thread Ian Harding
On Thu, Jan 25, 2018 at 2:33 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Jan 25, 2018 at 3:06 PM, Ian Harding 
> wrote:
>
>>
>>   4 |3 | Top.Bar.Blah
>>   5 |4 | Top.Bar.Blah.Scooby
>> ​​
>>
>
>
>> barf$# UPDATE area SET areapath = (select areapath from area
>> a where areaid = area.parentid) || subpath(areapath, nlevel(areapath) - 1)
>> barf$# WHERE OLD.areapath @> areapath;
>>
>
> barf=# update area set parentid = 2 where areaid = 4;
>>
>>
> OLD.areapath = Top.Bar.Blah
>
> When it's Top.Bar.Blah.Scooby's turn to be updated, its parent is 4, which
> has a value of "Top.Bar.Blah", and so nothing happens.  It doesn't matter
> whether row 4 or row 5 occurs first - when multiple rows are updated your
> subselect presents the exact same data to each row and it doesn't take into
> account updates applied to other rows made concurrently.
>
> Top.Bar.Blah's parent was changed to 2, so it does go from "Top.Bar.Blah"
> to "Top.Foo.Blah".
> 
> When you then run your update manually, row 5 sees the newly committed
> areapath value for row 4 and now effects the change to
> "Top.Foo.Blah.Scooby".
>
> IOW, cascading updates are not possible (regardless of whether you are
> doing them in a trigger or manually).  If you had a three-deep parent-child
> hierarchy to change in your example you would have seen that your example
> script would only have corrected the first two levels.
>
> Probably your trigger will need to capture (SELECT areapath ​.. WHERE id =
> NEW.parentid) and then use that constant in your SET clause.
>
> ​David J.
> ​
>
For the record:

CREATE OR REPLACE FUNCTION trig_areapath_u() RETURNS trigger AS
$$
DECLARE
  ltree_parentpath ltree;
  ltree_mypath ltree;
  int_cnt int;
BEGIN
  IF TG_OP = 'UPDATE' THEN
    IF (COALESCE(OLD.parentid, 0) != COALESCE(NEW.parentid, 0)) THEN
      -- Get the new parent path and save it
      -- Get the old path for this item and save it
      -- Replace the first X elements of the path for this and all
      -- my children with the parent path

      SELECT areapath
        INTO ltree_parentpath
        FROM area
       WHERE areaid = NEW.parentid;

      ltree_mypath := OLD.areapath;

      UPDATE area
         SET areapath = ltree_parentpath || subpath(areapath, nlevel(ltree_parentpath))
       WHERE ltree_mypath @> areapath;

      GET DIAGNOSTICS int_cnt = ROW_COUNT;
      RAISE NOTICE 'Rows affected: %', int_cnt;
    END IF;
  END IF;

  RETURN NULL;
END
$$
LANGUAGE plpgsql VOLATILE;
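
(For completeness, a hedged sketch of how this function might be attached; the
original CREATE TRIGGER statement was not posted in the thread, so the trigger
name and the AFTER UPDATE OF parentid / FOR EACH ROW timing are assumptions
based on how the function is written:)

-- Fires only when parentid changes, so the cascading UPDATE inside the
-- function (which changes only areapath) does not re-trigger it.
CREATE TRIGGER area_areapath_upd
  AFTER UPDATE OF parentid ON area
  FOR EACH ROW
  EXECUTE PROCEDURE trig_areapath_u();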


Information on savepoint requirement within transactions

2018-01-26 Thread Robert Zenz
I'm currently doing a small writeup of a bug fix in our framework which involves
savepoints in PostgreSQL (JDBC). However, I have a hard time locating the
documentation regarding this. I mean, from what I can extract from various
sources, PostgreSQL requires the use of savepoints if one wants to continue a
transaction after a failed statement, but I can't find where in the
documentation that is stated and documented.

Can somebody point me to the correct location where this is documented and maybe
even explained why that is the case?

Re: Information on savepoint requirement within transactions

2018-01-26 Thread Melvin Davidson
On Fri, Jan 26, 2018 at 9:47 AM, Robert Zenz 
wrote:

> I'm currently doing a small writeup of a bug fix in our framework which
> involves
> savepoints in PostgreSQL (JDBC). However, I have a hard time locating the
> documentation regarding this. I mean, from what I can extract from various
> sources, PostgreSQL requires to use savepoints if one wants to continue a
> transaction after a failed statement, but I can't find where in the
> documentation that is stated and documented.
>
> Can somebody point me to the correct location where this is documented and
> maybe
> even explained why that is the case?


You have not specified which version of PostgreSQL you are using (or your
O/S), but is this the documentation you are looking for?

https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html

https://www.postgresql.org/docs/9.6/static/sql-savepoint.html

https://www.postgresql.org/docs/9.6/static/sql-rollback-to.html

https://www.postgresql.org/docs/9.6/static/sql-release-savepoint.html

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


RE: A little RULE help?

2018-01-26 Thread Steven Winfield
On 01/25/2018 03:14 PM, Steven Winfield wrote:
>
> https://www.postgresql.org/docs/10/static/rules-triggers.html
>
> “For the things that can be implemented by both, which is best depends
> on the usage of the database. A trigger is fired once for each
> affected row. A rule modifies the query or generates an additional
> query. So if many rows are affected in one statement, a rule issuing
> one extra command is likely to be faster than a trigger that is called
> for every single row and must re-determine what to do many times.
> However, the trigger approach is conceptually far simpler than the
> rule approach, and is easier for novices to get right.”
>

Well, like I said, it may be better for what you are doing. I am not sure,
but I can say from personal implementation experience that for old-school
partitioning (e.g. everything before 10), triggers were so much faster
than rules that the general rule was, "don't use rules". That may have
changed since then.

JD



--
Command Prompt, Inc. || 
http://the.postgres.company/ || @cmdpromptinc

PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: 
https://postgresconf.org
* Unless otherwise stated, opinions are my own. *


There have been comments on- and off-list about rules generally being slower
than triggers, which seemed counterintuitive (for my case at least), so I’ve done
some timings.
(Also note that I haven’t done any partitioning here, new- or old-style - a few
people have mentioned RULEs in relation to partitioning.)
These compare the INSERT and UPDATE rules against equivalent INSTEAD OF 
triggers. Best of three runs each time, times are in seconds.

rows             10^4   10^5   10^6
insert rule       0.9   15.0    179
insert trigger    1.3   19.7    224
delete rule       1.8   22.8    282
delete trigger    2.3   28.0    331

…so the rules are about 20% faster than the triggers. Significant, but not
game-changing.
Note that this is on something quite close to a “real life” table too - the
maintenance of the primary key index and the GiST index that supports the
exclude constraint is included in all those timings, so a table without those
would have shown a bigger disparity between the two methods.
This makes sense - the RULEs just have one query to rewrite before it is 
planned and executed, whereas the TRIGGERs have to be re-executed for each row.

Back to my original attempt at writing an UPDATE rule…

CREATE RULE rule_test_update AS ON UPDATE TO rule_test_view DO INSTEAD (
    UPDATE rule_test SET tt = tstzrange(lower(tt), CURRENT_TIMESTAMP, '[)')
        WHERE id = OLD.id;
    INSERT INTO rule_test (tt, foo, bar) VALUES
        (tstzrange(CURRENT_TIMESTAMP, NULL, '[)'), NEW.foo, NEW.bar) RETURNING *;
);

…I wondered whether the pseudo relations NEW and OLD were somehow being 
modified by the first command (the UPDATE), such that the second command 
(INSERT) could not function properly. That would fit with what I observe, but 
I’m not sure how I go about proving or fixing it.

Best,
Steven.


Re: Using Token (JWT) authentication mechanism in Postgres

2018-01-26 Thread Alexander Kukushkin
Hi,


>
> And also I have an additional question, can the implemented PAM
> authentication module be used from JDBC connections? Or they works totally
> apart?
>

I think it should work from JDBC without any additional effort. And
basically, the PAM module itself doesn't even know that it is being called from Postgres.

Regards,
--
Alexander Kukushkin


Re: Information on savepoint requirement within transactions

2018-01-26 Thread Robert Zenz
Well, no. What I'm looking for is information on how the transactions behave in
an error case, and why there is the requirement to have a savepoint in place to
be able to continue a transaction after a failed statement.

As far as I'm aware, neither the PostgreSQL nor the OS version matters for this; I'm
interested in the general behavior of the database. But as I said, I do find a
lot of documentation on transactions in general, but not about their behavior in
an error case.

Your first link is "kinda" what I'm looking for, because it closes with

 > Moreover, ROLLBACK TO is the only way to regain control of a transaction
block that was put in aborted state by the system due to an error, short of
rolling it back completely and starting again.

and I'm looking for more information on *that*.


On 26.01.2018 15:59, Melvin Davidson wrote:
> On Fri, Jan 26, 2018 at 9:47 AM, Robert Zenz 
> wrote:
> 
>> I'm currently doing a small writeup of a bug fix in our framework which
>> involves
>> savepoints in PostgreSQL (JDBC). However, I have a hard time locating the
>> documentation regarding this. I mean, from what I can extract from various
>> sources, PostgreSQL requires to use savepoints if one wants to continue a
>> transaction after a failed statement, but I can't find where in the
>> documentation that is stated and documented.
>>
>> Can somebody point me to the correct location where this is documented and
>> maybe
>> even explained why that is the case?
> 
> 
> You have not specified which version of PostgreSQL you are using (or your
> O/S), but is this the documention you are looking for?
> 
> https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html
> 
> https://www.postgresql.org/docs/9.6/static/sql-savepoint.html
> 
> https://www.postgresql.org/docs/9.6/static/sql-rollback-to.html
> 
> https://www.postgresql.org/docs/9.6/static/sql-release-savepoint.html
> 

Re: Information on savepoint requirement within transactions

2018-01-26 Thread Melvin Davidson
On Fri, Jan 26, 2018 at 10:32 AM, Robert Zenz 
wrote:

> Well, no. What I'm looking for is information on how the transactions
> behave in
> an error case, and why there is the requirement to have a savepoint in
> place to
> be able to continue a transaction after a failed statement.
>
> As far as I'm aware neither PostgreSQL nor OS version do matter for this,
> I'm
> interested in the general behavior of the database. But as I said, I do
> find a
> lot of documentation on transactions in general, but not about their
> behavior in
> an error case.
>
> Your first link is "kinda" what I'm looking for, because it closes with
>
>  > Moreover, ROLLBACK TO is the only way to regain control of a transaction
> block that was put in aborted state by the system due to an error, short of
> rolling it back completely and starting again.
>
> and I'm looking on more information on *that*.
>
>
> On 26.01.2018 15:59, Melvin Davidson wrote:
> > On Fri, Jan 26, 2018 at 9:47 AM, Robert Zenz  >
> > wrote:
> >
> >> I'm currently doing a small writeup of a bug fix in our framework which
> >> involves
> >> savepoints in PostgreSQL (JDBC). However, I have a hard time locating
> the
> >> documentation regarding this. I mean, from what I can extract from
> various
> >> sources, PostgreSQL requires to use savepoints if one wants to continue
> a
> >> transaction after a failed statement, but I can't find where in the
> >> documentation that is stated and documented.
> >>
> >> Can somebody point me to the correct location where this is documented
> and
> >> maybe
> >> even explained why that is the case?
> >
> >
> > You have not specified which version of PostgreSQL you are using (or your
> > O/S), but is this the documention you are looking for?
> >
> > https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html
> >
> > https://www.postgresql.org/docs/9.6/static/sql-savepoint.html
> >
> > https://www.postgresql.org/docs/9.6/static/sql-rollback-to.html
> >
> > https://www.postgresql.org/docs/9.6/static/sql-release-savepoint.html
> >
>
Simply put, a SAVEPOINT does not allow you to "continue" a transaction
after an error.
What it does is allow you to commit everything up to the SAVEPOINT.
Everything after the SAVEPOINT is not committed. There is no such thing as
ROLLBACK THEN CONTINUE, which is what I think you are looking for.

Once again, please remember to specify your PostgreSQL version and O/S when
addressing this forum.
It helps to clarify solutions for historical purposes.


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Information on savepoint requirement within transactions

2018-01-26 Thread Robert Zenz
What I'm looking for is more information/documentation on that topic that I can
use as source and link back to (from a blog post).

That last paragraph in your first link is exactly what I meant. Let's start to
clarify things, put into pseudo code:

start transaction
insert into A
insert into B but fail
insert into C
commit

As far as I'm aware, in most other databases this would work like that: even
though one statement has failed, one can continue using this transaction and
actually commit the rows in A and C. In PostgreSQL the use of savepoints is
required:

start transaction
insert into A
create savepoint
insert into B but fail
rollback to savepoint
insert into C
commit

Otherwise the transaction is, after the failed statement, in a state in which it
can not be used anymore. Is that correct?
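
(For illustration, a minimal psql sketch of the failure mode described above,
using a hypothetical table t with a unique column x; the error texts are the
usual PostgreSQL messages and may vary slightly by version:)

BEGIN;
INSERT INTO t (x) VALUES (1);   -- succeeds
INSERT INTO t (x) VALUES (1);   -- ERROR: duplicate key value violates unique constraint
INSERT INTO t (x) VALUES (2);   -- ERROR: current transaction is aborted,
                                --        commands ignored until end of transaction block
COMMIT;                         -- reported as ROLLBACK; nothing is kept, not even the first row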


On 26.01.2018 16:42, Melvin Davidson wrote:
> On Fri, Jan 26, 2018 at 10:32 AM, Robert Zenz 
> wrote:
> 
>> Well, no. What I'm looking for is information on how the transactions
>> behave in
>> an error case, and why there is the requirement to have a savepoint in
>> place to
>> be able to continue a transaction after a failed statement.
>>
>> As far as I'm aware neither PostgreSQL nor OS version do matter for this,
>> I'm
>> interested in the general behavior of the database. But as I said, I do
>> find a
>> lot of documentation on transactions in general, but not about their
>> behavior in
>> an error case.
>>
>> Your first link is "kinda" what I'm looking for, because it closes with
>>
>>  > Moreover, ROLLBACK TO is the only way to regain control of a transaction
>> block that was put in aborted state by the system due to an error, short of
>> rolling it back completely and starting again.
>>
>> and I'm looking on more information on *that*.
>>
>>
>> On 26.01.2018 15:59, Melvin Davidson wrote:
>>> On Fri, Jan 26, 2018 at 9:47 AM, Robert Zenz >>
>>> wrote:
>>>
 I'm currently doing a small writeup of a bug fix in our framework which
 involves
 savepoints in PostgreSQL (JDBC). However, I have a hard time locating
>> the
 documentation regarding this. I mean, from what I can extract from
>> various
 sources, PostgreSQL requires to use savepoints if one wants to continue
>> a
 transaction after a failed statement, but I can't find where in the
 documentation that is stated and documented.

 Can somebody point me to the correct location where this is documented
>> and
 maybe
 even explained why that is the case?
>>>
>>>
>>> You have not specified which version of PostgreSQL you are using (or your
>>> O/S), but is this the documention you are looking for?
>>>
>>> https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html
>>>
>>> https://www.postgresql.org/docs/9.6/static/sql-savepoint.html
>>>
>>> https://www.postgresql.org/docs/9.6/static/sql-rollback-to.html
>>>
>>> https://www.postgresql.org/docs/9.6/static/sql-release-savepoint.html
>>>
>>
> Simply put, a SAVEPOINT does not allow you to "continue" a transaction
> after an error.
> What is does is allow you to commit everything up to the SAVEPOINT.
> Everything after
> the SAVEPOINT is not committed. There is no such thing as ROLLBACK THEN
> CONTINUE,
> which is what I think you are looking for.
> 
> Once again, please remember to specify your PostgreSQL version and O/S when
> addressing this forum.
> It helps to clarify solutions for historical purposes.
> 
> 

Re: Information on savepoint requirement within transactions

2018-01-26 Thread David G. Johnston
On Fri, Jan 26, 2018 at 8:42 AM, Melvin Davidson 
wrote:

>
> On Fri, Jan 26, 2018 at 10:32 AM, Robert Zenz 
> wrote:
>

​The convention for these lists is to inline or bottom-post.  Top-posting
is discouraged.
​

> Well, no. What I'm looking for is information on how the transactions
>> behave in
>
> an error case, and why there is the requirement to have a savepoint in
>> place to
>> be able to continue a transaction after a failed statement.
>>
>
Here's my take; the docs support this but maybe take some interpretation...

A SAVEPOINT ​begins what is effectively a sub-transaction without ending
the main transaction.  If that sub-transaction fails you can throw it away
(ROLLBACK TO) and pretend that it didn't happen: you are dropped back to
the point where the savepoint was issued and the main transaction
re-engaged.

It's kinda like a try/catch block:

BEGIN:

do_stuff

SAVEPOINT try { lets_fail; this_works; } catch { ROLLBACK TO }

do_more_stuff

​COMMIT;​

​As ​long as both do_stuff and do_more_stuff succeed when you commit the
things that they did will persist.

The stuff in lets_fail AND this_works, however, will be discarded because
of the lets_fail failing and this_works belonging to the same
sub-transaction.

​If do_more_stuff depends on lets_fail or this_works succeeding then
do_more_stuff will ​fail and will cause do_stuff to rollback as well.
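
(A minimal SQL sketch of the same try/catch pattern, using a hypothetical table
t with a unique column x; only the work done after the savepoint is lost:)

BEGIN;
INSERT INTO t (x) VALUES (1);   -- do_stuff
SAVEPOINT sp;                   -- "try"
INSERT INTO t (x) VALUES (1);   -- lets_fail: e.g. a unique violation
-- this_works would be skipped here: the subtransaction is already aborted
ROLLBACK TO SAVEPOINT sp;       -- "catch": discard only the savepoint's work
INSERT INTO t (x) VALUES (2);   -- do_more_stuff
COMMIT;                         -- rows 1 and 2 persist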


>> As far as I'm aware neither PostgreSQL nor OS version do matter for this
>
>
​You are correct.

> You have not specified which version of PostgreSQL you are using (or your
>> > O/S), but is this the documention you are looking for?
>> >
>> > https://www.postgresql.org/docs/9.6/static/tutorial-transactions.html
>> >
>> > https://www.postgresql.org/docs/9.6/static/sql-savepoint.html
>> >
>> > https://www.postgresql.org/docs/9.6/static/sql-rollback-to.html
>> >
>> > https://www.postgresql.org/docs/9.6/static/sql-release-savepoint.html
>> >
>>
> Simply put, a SAVEPOINT does not allow you to "continue" a transaction
> after an error.
>

​Yes it does.​


> What is does is allow you to commit everything up to the SAVEPOINT.
> Everything after
> the SAVEPOINT is not committed. There is no such thing as ROLLBACK THEN
> CONTINUE,
> which is what I think you are looking for.
>

​SAVEPOINTS do not commit.​

Your comments indicate that you should probably re-read the docs to which
you are referring.  I haven't really used savepoints myself but the docs
describe reasonably well how they function.

https://www.postgresql.org/docs/10/static/sql-rollback-to.html

"The savepoint remains valid and can be rolled back to again later, if
needed."

That alone requires that the main transaction remain in force, that you can
add new commands to it, and that if any of them fail you can ROLLBACK TO
SAVEPOINT again.

Once again, please remember to specify your PostgreSQL version and O/S when
> addressing this forum.
> It helps to clarify solutions for historical purposes.
>
>
​Rarely if ever, which is why pretty much no one but you asks for it unless
it is needed.  You just picked up a habit during your years in tech support
and apply it to your responses on these lists without consideration as to
whether it is valid or not.  I'd give you a bit of benefit of the doubt if
you limited your requests to true bug reports, and maybe -performance, but
the vast majority of -general questions do not depend on knowing the version
and even fewer need to know the O/S.
​
David J.


Re: Information on savepoint requirement within transactions

2018-01-26 Thread David G. Johnston
On Fri, Jan 26, 2018 at 8:57 AM, Robert Zenz 
wrote:

> In PostgreSQL the use of savepoints is
> required:
>
> start transaction
> insert into A
> create savepoint
> insert into B but fail
> rollback to savepoint
> insert into C
> commit
>
> Otherwise the transaction is, after the failed statement, in a state in
> which it
> can not be used anymore. Is that correct?


​Yes.

David J.
​


Re: Information on savepoint requirement within transactions

2018-01-26 Thread Melvin Davidson
>

> As far as I'm aware neither PostgreSQL nor OS version do matter for this


Yes, as of this date. However, that is not to say that the SQL standard (or
PostgreSQL) will not change in the _future_, such that there "may" be a
"ROLLBACK TO SAVEPOINT OR CONTINUE" after a failure, in which case someone
else may be confused because there is no reference to the actual PostgreSQL
version.
That is why it is important to include the version "AT THE TIME OF POSTING"
so that future OPs will have a point of reference for the answer.

On Fri, Jan 26, 2018 at 11:12 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Fri, Jan 26, 2018 at 8:57 AM, Robert Zenz 
> wrote:
>
>> In PostgreSQL the use of savepoints is
>> required:
>>
>> start transaction
>> insert into A
>> create savepoint
>> insert into B but fail
>> rollback to savepoint
>> insert into C
>> commit
>>
>> Otherwise the transaction is, after the failed statement, in a state in
>> which it
>> can not be used anymore. Is that correct?
>
>
> ​Yes.
>
> David J.
> ​
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Information on savepoint requirement within transactions

2018-01-26 Thread David G. Johnston
On Fri, Jan 26, 2018 at 9:27 AM, Melvin Davidson 
wrote:

> >
>
>> As far as I'm aware neither PostgreSQL nor OS version do matter for this
>
>
> Yes as of this date. However, that is not to say that the SQL standard (or
> PostgreSQL) may change
> in the _future_, such that there "may" be a "ROLLBACK TO SAVEPOINT OR
> CONTINUE" after a failure,
> in which case, someone else may be confused because there is no reference
> to the actual PostgreSQL version.
> That is why it is important to include the version "AT THE TIME OF
> POSTING" so that future op's will
> have a point of reference for the answer.
>
>
So preface your answers with: "as of 10.0 this is what I understand"

The timestamp on the email is likely more than sufficient for someone to
lookup the PostgreSQL version should the need arise.

I'm not saying the presence of a version is bad, only the badgering of
people asking questions to provide it when it has no bearing on the answer
to the question being asked at the time it is asked.  You are more than
welcome to meet the needs of future readers by anchoring every response
you choose to make to the version or versions you know your answers apply to.

David J.


Re: Information on savepoint requirement within transactions

2018-01-26 Thread Melvin Davidson
> ...the presence of a version is bad, only the badgering of people asking
questions to provide it when it has no bearing on the answer...

Really? Is it that hard for someone to provide version and O/S?

>The timestamp on the email is likely more than sufficient
Do you really think OPs look for the timestamp of answers as opposed to
content?
As someone who spent 8 years in high-level tech support for Ingres (the
predecessor to Postgres), I assure you
the inclusion of version and O/S is critical for historical purposes.



On Fri, Jan 26, 2018 at 11:41 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Fri, Jan 26, 2018 at 9:27 AM, Melvin Davidson 
> wrote:
>
>> >
>>
>>> As far as I'm aware neither PostgreSQL nor OS version do matter for this
>>
>>
>> Yes as of this date. However, that is not to say that the SQL standard
>> (or PostgreSQL) may change
>> in the _future_, such that there "may" be a "ROLLBACK TO SAVEPOINT OR
>> CONTINUE" after a failure,
>> in which case, someone else may be confused because there is no reference
>> to the actual PostgreSQL version.
>> That is why it is important to include the version "AT THE TIME OF
>> POSTING" so that future op's will
>> have a point of reference for the answer.
>>
>>
> So preface your answers with: "as of 10.0 this is what I understand"
>
> The timestamp on the email is likely more than sufficient for someone to
> lookup the PostgreSQL version should the need arise.
>
> I'm not saying the presence of a version is bad, only the badgering of
> people asking questions to provide it when it has no bearing on the answer
> to the question being asked at the time it is asked.  You are more than
> welcome to meets the needs of future readers by anchoring every response
> you choose to make to the version or version you know your answers apply to.
>
> David J.
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Information on savepoint requirement within transactions

2018-01-26 Thread David G. Johnston
On Fri, Jan 26, 2018 at 9:47 AM, Melvin Davidson 
wrote:

> > ...the presence of a version is bad, only the badgering of people asking
> questions to provide it when it has no bearing on the answer...
>
> Really? Is it that hard for someone to provide version and O/S?
>

​Its difficult to remember to include it when it should make no difference
in receiving an answer to the question being asked today.


> >The timestamp on the email is likely more than sufficient
> Do you really think op's look for the timestamp of answers as opposed to
> content?
>

Maybe not, but in 5 years, if someone pulls up this thread and sees that the
OP was talking about 9.6, what are they supposed to do with that
information?  They would have no way to know whether things have changed
between 9.6 and 15.  It's more in their face and they might be more
skeptical, but they are still going to try using the information if the
topic seems similar.


> As someone that spent 8 years in high level tech support for Ingres (the
> predecessor to Postgres), I assure you
> the inclusion of version and O/S is critical for historical purpose.
>

​The -general mailing list is not tech support, its a social forum.  -bugs
is tech support and for that list the community does indeed post a request
that version and O/S information be provided, and even has a form-field to
be filled in.

If you feel so strongly then by all means add version and O/S information
to all of your responses.  I'm doubtful it will generally be of benefit and
even less certain that having that information appear on the 3rd email (and
maybe only the third if it gets trimmed away during responses) in the
thread would be an improvement.  One cannot control the initial email and
by the time you ask and give an answer anyway the second one is gone and
the thread may be done with (if one provides sufficient and correct
advice).  Having been given an answer I doubt most people would reply:
"thanks, and for the record my version and O/S is such-and-such".  Besides,
they may still be on 9.3 while the response covers 9.3-10; it is the
version of the response that would seem to matter more than the version the
questioner happens to be using and fails to provide up-front anyway.

David J.


PostgreSQL Restore Database Without Backup and Log

2018-01-26 Thread fabio.silva
Hi!
I had a ransomware attack and I lost a lot of files from my server.
I just have the OID folder and the files inside it.
Is it possible to restore the database using only these data files?
I don't have the data folder, I don't have the global folder; I just have
datafiles from my old database.
I'm using PostgreSQL 9.4 and Windows Server 2012 R2.
Thanks!



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



RE: Using Token (JWT) authentication mechanism in Postgres

2018-01-26 Thread Julio Cesar Tenganan Daza
Hi Alexander,

I think we can use PAM authentication; something similar to what you did would be a
good mechanism.

And I also have an additional question: can the implemented PAM authentication
module be used from JDBC connections? Or do they work totally apart?

Thank you so much!


Regards,


Cesar

From: Alexander Kukushkin [mailto:cyberd...@gmail.com]
Sent: Thursday, January 25, 2018 3:43 AM
To: Julio Cesar Tenganan Daza 
Cc: pgsql-gene...@postgresql.org
Subject: Re: Using Token (JWT) authentication mechanism in Postgres

Hi,


2018-01-24 22:27 GMT+01:00 Julio Cesar Tenganan Daza <ctengan...@psl.com.co>:
Hello,

I would like to know if it is possible to use a token (JWT) authentication mechanism
in Postgres, in order to authenticate users and also authorize access to
specific tables. This is in a multi-tenant application context where users can
create their own tables and share them if they want.

Is this authentication mechanism possible, or is there any plugin to achieve
it?

Postgres can use PAM for authentication.
I am not sure that such a plugin already exists, but it shouldn't be very hard to
implement.

There are a few problems though:
1. The JWT token already contains information about the username, but you still have to
provide it (the username) when opening the connection.
2. The token has to be sent as the connection password. Therefore the connection must be
encrypted.
3. Usually JWT tokens are quite big, but, for example, when psql is
asking you for a password, it assumes the password can't be longer than 100
characters. And this value is hard-coded. It's possible to overcome this issue
if you specify your token in the PGPASSWORD env variable.

We at Zalando are using JWT tokens to authenticate employees when they are
accessing Postgres databases, but we are not dealing with JWT directly.
We have some OAuth infrastructure in place, which can validate JWT tokens.
In the end it boiled down to sending an HTTP request to a tokeninfo service and
validating its answer.

Source code of PAM module is here: https://github.com/CyberDem0n/pam-oauth2

Basically you can do something similar. Either take pam-oauth2 as a reference
and add the possibility to validate JWT tokens, or implement your own tokeninfo service.
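
(For illustration only, a sketch of how a PAM-based token setup is typically
wired up; the PAM service name "oauth2", the host name and the user below are
assumptions, not details from this thread:)

# pg_hba.conf on the server: hand password authentication to the PAM service
# "oauth2" (which would be configured to use a module such as pam-oauth2)
hostssl  all  all  0.0.0.0/0  pam  pamservice=oauth2

# client side: supply the (long) JWT via PGPASSWORD to avoid psql's prompt limit
PGPASSWORD="$JWT" psql "host=db.example.com user=alice sslmode=require"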



Thank You for your help!

Regards,

Cesar

Regards,
--
Alexander Kukushkin


Re: A little RULE help?

2018-01-26 Thread Vik Fearing
On 01/26/2018 04:19 PM, Steven Winfield wrote:
> Back to my original attempt at writing an UPDATE rule…
> 
> CREATE RULE rule_test_update AS ON UPDATE TO rule_test_view DO INSTEAD (
> 
>     UPDATE rule_test SET tt = tstzrange(lower(tt),
> CURRENT_TIMESTAMP, '[)') WHERE id = OLD.id;
> 
>     INSERT INTO rule_test (tt, foo, bar) VALUES
> (tstzrange(CURRENT_TIMESTAMP, NULL, '[)'), NEW.foo, NEW.bar) RETURNING *;
> 
> );
> 
> …I wondered whether the pseudo relations NEW and OLD were somehow being
> modified by the first command (the UPDATE), such that the second command
> (INSERT) could not function properly.

No.  It's because your view's WHERE clause is being added to your INSERT
and of course nothing matches, so nothing gets inserted.

> That would fit with what I observe, but I’m not sure how I go about
> proving
EXPLAIN

> or fixing it.

Don't use RULEs.
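
(For reference, a quick sketch of the EXPLAIN approach: run it against the view
from upthread and it prints one plan per query substituted by the rule, where
the extra WHERE condition added by the rewriter shows up as a filter. The
column choice here is arbitrary and only meant to keep the statement type-safe:)

EXPLAIN UPDATE rule_test_view SET foo = foo;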
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support