Re: In which session context is a trigger run?

2018-12-30 Thread Peter J. Holzer
On 2018-12-29 13:01:47 -0800, Adrian Klaver wrote:
> On 12/28/18 11:44 PM, Mitar wrote:
> > On Fri, Dec 28, 2018 at 9:36 PM Adrian Klaver  
> > wrote:
> > > When you create the temporary function it is 'pinned' to a particular
> > > session/pg_temp_nn. Running the trigger in another session 'pins' it to
> > > that session and it is not able to see the posts_temp table in the
> > > original session.
> > 
> > Yes. But why is trigger run in that other session? Could there be a
> > way to get trigger to run in the session where it was declared?
> 
> Because it is where the temporary table is declared that is important.

Is there a "not" missing in this sentence? Otherwise I don't understand
what you mean and suspect you have have misunderstood what Mitar wants.

As I understand it, what Mitar wants can't work because it clashes with
the concepts of "sessions" and "transactions".

Each session executes transactions sequentially, and the changes
effected by any transaction become visible to other sessions only after
the transaction committed.

If I understood Mitar correctly, he wants the trigger to execute in the
session where it was declared, not in the session where the triggering
statement was executed.

So we have two sessions A and B. There is a permanent table P and a
temporary table T in session A. The trigger on P (with a temporary
function) was declared in session A, and we execute an insert statement
in session B.

Assuming session A is currently idle (otherwise we would have to block
until the current transaction in A commits or rolls back), we start a
new transaction in A which executes the trigger. This would see the
temporary table in session A.

But since the transaction in session B hasn't yet committed, it wouldn't
see the data that the insert statement has just inserted. Since the
point of an after insert trigger is usually to do something with this
new data, that would make the trigger useless.

hp


-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ |                 -- Ross Anderson




Re: Recursive CTE

2018-12-30 Thread Andy Colson

On 12/29/18 12:34 PM, Glenn Schultz wrote:

All,
Following my earlier post on variable instantiation, I rethought how I was
working with dates and realized I can fix the date and use a static interval.
I came up with this recursive CTE, which is the end goal. However, the problem
is that the convexity query cannot be used as a subquery. So I think I need to
use a join of convexity on the original query - I'm not sure; I am a little
stuck at this point, but I feel I am close. Any help would be appreciated.

-Glenn


If you are trying to find month summaries, then maybe date_trunc would be 
easier:

select date_trunc('month', fctrdt),
  round(smmtocpr(cast(sum(currrpb * case when fnmloan.fctrdt = fnmloan_data.fctrdt
                                         then fnmloan_data.event else 0 end)
                      / sum(currrpb) as numeric) * 100), 4) * 100 as "CPR 3mo"
from fnmloan
group by date_trunc('month', fctrdt)





Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Peter Eisentraut
On 29/12/2018 20:04, Chuck Martin wrote:
> I thought I knew how to do this, but I apparently don't. I have to set
> up a new server as a standby for a PG 11.1 server. The main server has a
> lot more resources than the standby. What I want to do is run
> pg_basebackup on the main server with the output going to the data
> directory on the new server. But when I give this command:
> 
> pg_basebackup -D "ssh root@10.0.1.16:/mnt/dbraid/data" -P -v -X s
> 
> 
> it instead writes to my root drive which doesn't have the space, so it
> fails and deletes the partial backup. 

What you might be thinking of is the "old" method of doing base backups
before pg_basebackup:  Call pg_start_backup() and then do file system
operations (tar, scp, whatever) to move the data files to where you want
them.  This is mostly obsolete.  You should run pg_basebackup on the
host where you want to set up your standby.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Chuck Martin
On Sun, Dec 30, 2018 at 11:20 AM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 29/12/2018 20:04, Chuck Martin wrote:
> > I thought I knew how to do this, but I apparently don't. I have to set
> > up a new server as a standby for a PG 11.1 server. The main server has a
> > lot more resources than the standby. What I want to do is run
> > pg_basebackup on the main server with the output going to the data
> > directory on the new server. But when I give this command:
> >
> > pg_basebackup -D "ssh root@10.0.1.16:/mnt/dbraid/data" -P -v -X s
> >
> >
> > it instead writes to my root drive which doesn't have the space, so it
> > fails and deletes the partial backup.
>
> What you might be thinking of is the "old" method of doing base backups
> before pg_basebackup:  Call pg_start_backup() and then do file system
> operations (tar, scp, whatever) to move the data files to where you want
> them.  This is mostly obsolete.  You should run pg_basebackup on the
> host where you want to set up your standby


Thanks. It’s been a while since I set up replication. Not to mention
several Postgres versions. I’ve started pg_basebackup from the standby. It
failed once due to an ssh error, but I reloaded sshd and started again. May
take a while. It’s about 750 GB.

> .
>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
> --
Chuck Martin
Avondale Software


Re: In which session context is a trigger run?

2018-12-30 Thread Adrian Klaver

On 12/30/18 3:08 AM, Peter J. Holzer wrote:

On 2018-12-29 13:01:47 -0800, Adrian Klaver wrote:

On 12/28/18 11:44 PM, Mitar wrote:

On Fri, Dec 28, 2018 at 9:36 PM Adrian Klaver  wrote:

When you create the temporary function it is 'pinned' to a particular
session/pg_temp_nn. Running the trigger in another session 'pins' it to
that session and it is not able to see the posts_temp table in the
original session.


Yes. But why is trigger run in that other session? Could there be a
way to get trigger to run in the session where it was declared?


Because it is where the temporary table is declared that is important.


Is there a "not" missing in this sentence? Otherwise I don't understand
what you mean and suspect you have have misunderstood what Mitar wants.


I will agree I have no idea what Mitar wants, as the examples to date 
have not included enough information. That is why I asked for more 
information.




As I understand it, what Mitar wants can't work because it clashes with
the concepts of "sessions" and "transactions".


Hence the links to the sections of the documentation that explain that, 
on the assumption that might help.




Each session executes transactions sequentially, and the changes
effected by any transaction become visible to other sessions only after
the transaction committed.

If I understood Mitar correctly, he wants the trigger to execute in the
session where it was declared, not in the session where the triggering
statement was executed.


There is the additional hitch that the trigger is declared to use
a temporary function that is defined in the pg_temp alias schema.
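
For concreteness, a hypothetical sketch of the kind of setup under
discussion (table and column names invented), run in the original session:

CREATE TEMPORARY TABLE posts_temp (id int, body text);

CREATE FUNCTION pg_temp.copy_post() RETURNS trigger AS $$
BEGIN
  INSERT INTO posts_temp VALUES (NEW.id, NEW.body);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER posts_copy AFTER INSERT ON posts
  FOR EACH ROW EXECUTE PROCEDURE pg_temp.copy_post();

An INSERT on posts from another session fires the trigger in that other
session, where the posts_temp created above is not visible.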




So we have two sessions A and B. There is a permanent table P and a
temporary table T in session A. The trigger on P (with a temporary
function) was declared in session A, and we execute an insert statement
in session B.

Assuming session A is currently idle (otherwise we would have to block
until the current transaction in A commits or rolls back), we start a
new transaction in A which executes the trigger. This would see the
temporary table in session A.

But since the transaction in session B hasn't yet committed, it wouldn't
see the data that the insert statement has just inserted. Since the
point of an after insert trigger is usually to do something with this
new data, that would make the trigger useless.

 hp





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Francisco Olarte
On Sun, Dec 30, 2018 at 5:52 PM Chuck Martin  wrote:
..
> I’ve started pg_basebackup from the standby. It failed once due to an ssh
> error, but I reloaded sshd and started again. May take a while. It’s about
> 750 GB.

IIRC pg_basebackup uses the postgres connection, so I suppose ssh failed
on the terminal session you were using to run it. If this is your problem
you could (should, IMO) install something like tmux / screen, which is a
real life saver for these cases. (I'd recommend tmux if you've never used
either of them; they are terminal multiplexers which let you open a
session, run some terminal sessions under it, and
connect/disconnect/reconnect to them. Especially when your ssh barfs they
just disconnect the session, so you just ssh again and reconnect. They are
great when you have long-running terminal commands like pg_basebackup /
pg_dump; I routinely use them to launch a long-running process and just
reconnect periodically to check on it.)

Francisco Olarte.



Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Jeff Janes
On Sat, Dec 29, 2018 at 2:05 PM Chuck Martin 
wrote:

> I thought I knew how to do this, but I apparently don't. I have to set up
> a new server as a standby for a PG 11.1 server. The main server has a lot
> more resources than the standby. What I want to do is run pg_basebackup on
> the main server with the output going to the data directory on the new
> server.
>

pg_basebackup consumes few resources on the standby anyway in the mode you
are running it, other than network and disk.  And those are inevitable
given your end goal, so if you could do what you want, I think it still
wouldn't do what you want.

If you really want to spare the network, you can run compression on the
server side then decompress on the standby.  Currently you can't compress
on the server when invoking it on the standby, so:

pg_basebackup -D - -Ft -X none | pxz | ssh 10.0.1.16 "tar -xJf - -C /somewhere/data_test"

Unfortunately you can't use this along with -X stream or -X fetch.

Really I would probably compress to a file and then use scp/rsync, rather
than streaming into ssh.  That way if ssh gets interrupted, you don't lose
all the work.

Cheers,

Jeff

>


jsonb : find row by array object attribute

2018-12-30 Thread Rory Campbell-Lange
With a table like this:

  Table "public.x"
 Column | Type  | Modifiers 
+---+---
 j  | jsonb | 

and data like this:
                        j
---------------------------------------------------
 {"a": 1, "people": [{"id": 2002}, {"id": 2004}]}
 {"a": 3, "people": [{"id": 2003}, {"id": 2005}]}
(2 rows)

I'd like to be able to find any row with a particular people id
attribute.

I can do it explicitly like this:

select * from x where j->'people'->0->'id' = '2003'::jsonb;

                        j
---------------------------------------------------
 {"a": 3, "people": [{"id": 2003}, {"id": 2005}]}
(1 row)

but that doesn't help if I need to find if any person matches the id
attribute I'm looking for.

I can get part of the way by searching like this:

=> select * from (
select jsonb_array_elements(j #>'{people}') as jae from x
) y
where jae->'id' = '2002'::jsonb;

     jae
--------------
 {"id": 2002}
(1 row)

but I can't work out how to return the whole row containing a desired
people id value.

Thanks for any help
Rory






Re: jsonb : find row by array object attribute

2018-12-30 Thread Tom Lane
Rory Campbell-Lange  writes:
> ... I can't work out how to return the whole row containing a desired
> people id value.

Something like this, maybe?

=# select * from x where
  '2003'::jsonb in (select jsonb_array_elements(j #>'{people}')->'id');
                        j
---------------------------------------------------
 {"a": 3, "people": [{"id": 2003}, {"id": 2005}]}
(1 row)

It's not too efficient though :-(

regards, tom lane



Re: jsonb : find row by array object attribute

2018-12-30 Thread Andrew Gierth
> "Rory" == Rory Campbell-Lange  writes:

 Rory> and data like this:
 Rory> j 
 Rory> --
 Rory>  {"a": 1, "people": [{"id": 2002}, {"id": 2004}]}
 Rory>  {"a": 3, "people": [{"id": 2003}, {"id": 2005}]}
 Rory> (2 rows)

 Rory> I'd like to be able to find any row with a particular people id
 Rory> attribute.

where j @> '{"people":[{"id":2003}]}'

(meaning: j contains a key "people" whose value is an array containing
an element {"id":2003})

Since @> can use GIN indexes, you should usually try and find a search
condition using it before resorting to playing with -> or expanding out
array values.
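
For example (a sketch; the index name is made up, and jsonb_path_ops
indexes only support @>):

create index x_j_people_idx on x using gin (j jsonb_path_ops);

select * from x where j @> '{"people": [{"id": 2003}]}';

A plain "using gin (j)" index works too and also supports the existence
operators; jsonb_path_ops is just smaller and usually faster for @>.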

-- 
Andrew (irc:RhodiumToad)



Re: jsonb : find row by array object attribute

2018-12-30 Thread Andrew Gierth
> "Andrew" == Andrew Gierth  writes:
> "Rory" == Rory Campbell-Lange  writes:

 Rory> and data like this:
 Rory> j 
 Rory> --
 Rory> {"a": 1, "people": [{"id": 2002}, {"id": 2004}]}
 Rory> {"a": 3, "people": [{"id": 2003}, {"id": 2005}]}
 Rory> (2 rows)

 Rory> I'd like to be able to find any row with a particular people id
 Rory> attribute.

 Andrew> where j @> '{"people":[{"id":2003}]}'

 Andrew> (meaning: j contains a key "people" whose value is an array
 Andrew> containing an element {"id":2003})

or to be more precise: j is an object containing a key "people" whose
value is an array containing an element which is an object containing a
key "id" with value 2003

i.e. {"a":3, "people": [{"id":2003,"blah":123},{"id":2004}]}  would
match the condition too.

-- 
Andrew (irc:RhodiumToad)



Re: getting pg_basebackup to use remote destination

2018-12-30 Thread Chuck Martin
Maybe I need to rethink this and take Jeff's advice. I executed this:

pg_basebackup -h [main server's URL] -U postgres -P -v -X s -D
/mnt/dbraid/data

8 hours ago, and it is now still at 1%. Should it be that slow? The
database in question is about 750 GB, and both servers are on the same GB
ethernet network.

Chuck Martin
Avondale Software


On Sun, Dec 30, 2018 at 3:28 PM Jeff Janes  wrote:

> On Sat, Dec 29, 2018 at 2:05 PM Chuck Martin 
> wrote:
>
>> I thought I knew how to do this, but I apparently don't. I have to set up
>> a new server as a standby for a PG 11.1 server. The main server has a lot
>> more resources than the standby. What I want to do is run pg_basebackup on
>> the main server with the output going to the data directory on the new
>> server.
>>
>
> pg_basebackup consumes few resources on the standby anyway in the mode you
> are running it, other than network and disk.  And those are inevitable
> given your end goal, so if you could do what you want, I think it still
> wouldn't do what you want.
>
> If you really want to spare the network, you can run compression on the
> server side then decompress on the standby.  Currently you can't compress
> on the server when invoking it on the standby, so:
>
> pg_basebackup -D - -Ft -X none |pxz | ssh 10.0.1.16 "tar -xJf - -C
> /somewhere/data_test"
>
> Unfortunately you can't use this along with -X stream or -X fetch.
>
> Really I would probably compress to a file and then use scp/rsync, rather
> than streaming into ssh.  That way if ssh gets interrupted, you don't lose
> all the work.
>
> Cheers,
>
> Jeff
>
>>


In which session context is a trigger run?

2018-12-30 Thread David G. Johnston
On Friday, December 28, 2018, Mitar  wrote:
>
> On Fri, Dec 28, 2018 at 9:36 PM Adrian Klaver 
> wrote:
> > When you create the temporary function it is 'pinned' to a particular
> > session/pg_temp_nn. Running the trigger in another session 'pins' it to
> > that session and it is not able to see the posts_temp table in the
> > original session.
>
> Yes. But why is trigger run in that other session?


Because that is the most obvious behavior.  Triggers are assumed to outlive
their creating session 99.9% of the time, or so.


> Could there be a way to get trigger to run in the session where it was
> declared?

Doesn’t seem likely.  Maybe you can use NOTIFY/LISTEN in your temporary
triggers and have your active client perform the work after being notified.
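
A minimal sketch of that approach (channel, table, and function names are
invented; the trigger function can be an ordinary, non-temporary one since
it only notifies):

CREATE FUNCTION notify_new_post() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('posts_changed', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER posts_notify AFTER INSERT ON posts
  FOR EACH ROW EXECUTE PROCEDURE notify_new_post();

-- in the session that owns the temporary table:
LISTEN posts_changed;

On each notification (delivered after the inserting transaction commits),
that client copies the new row into its temporary table itself.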

What you want is somewhat novel.  It’s worth considering whether it’s
uncommon because it’s ingenious or because other techniques are more
appropriate.

David J.


Re: ERROR: found multixact XX from before relminmxid YY

2018-12-30 Thread Andres Freund
Hi,

On 2018-12-28 19:49:36 -0500, Tom Lane wrote:
> Mark Fletcher  writes:
> > Starting yesterday morning, auto vacuuming of one of our postgresql 9.6.10
> > (CentOS 7) table's started failing:
> > ERROR:  found multixact 370350365 from before relminmxid 765860874
> > CONTEXT:  automatic vacuum of table "userdb.public.subs"
>
> Ugh.
>
> > Reading the various discussions about this error, the only solution I found
> > was here:
> > https://www.postgresql.org/message-id/CAGewt-ukbL6WL8cc-G%2BiN9AVvmMQkhA9i2TKP4-6wJr6YOQkzA%40mail.gmail.com
> > But no other reports of this solving the problem. Can someone verify that
> > if I do the mentioned fix (and I assume upgrade to 9.6.11) that will fix
> > the problem? And that it doesn't indicate table corruption?
>
> Yeah, SELECT FOR UPDATE should overwrite the broken xmax value and thereby
> fix it, I expect.

Right.
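
For the table in the error message that would be roughly (it takes a row
lock on every row, so pick a quiet moment and test on a copy first):

BEGIN;
SELECT 1 FROM public.subs FOR UPDATE;  -- locking rewrites xmax on each row
COMMIT;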

> However, I don't see anything in the release notes
> suggesting that we've fixed any related bugs since 9.6.10, so if this
> just appeared then we've still got a problem :-(.  Did anything
> interesting happen since your last successful autovacuum on that table?
> Database crashes, WAL-related parameter changes, that sort of thing?

I think it's entirely conceivable that the damage happened with earlier 
versions,
and just became visible now as the global horizon increased.

Greetings,

Andres Freund



Re: logical replication resiliency

2018-12-30 Thread Achilleas Mantzios

On 23/12/18 3:39 μ.μ., Istvan Soos wrote:

I have a database with about 1 TB of data, mostly inserts, little
updates and deletes, and I want to setup 2 logical replication for it.
What should be my expectation with the replication?

1. What is the time-window that either of the nodes can be offline?
- Will it survive if the node doing the publishing is offline for a
few hours and then goes back online?

No problem with that.

- Will it survive if the subscriber node is offline for a few hours
and then goes back online?
There is a problem with that. Because of the replication slot, the publisher
will start keeping WAL while it waits for the subscriber to wake up. How much
depends on the amount of WAL produced (peak time or not) and any other special
(business) situations that could influence the amount of writes to the DB. You
can easily find out by looking at your archive. Note that once the space
dedicated for WAL fills up, your publisher node will PANIC.
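
One way to watch how much WAL a slot is holding back on the publisher (a
quick sketch, works on pgsql 10+):

SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;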


2. If the replication is aborted for whatever reason, is there a way
to do it incrementally, or will it re-sync the 1TB at every such
event?

It is done incrementally, provided there is WAL continuity.


3. What alternative tools are out there to e.g. periodically sync from
one db to another?
PgSQL replication solutions in various forms, sizes and shapes have been
around since 2001, so there are plenty of tools out there, plus the custom
scripts you can always write. But since logical replication is in the core
(starting from pgsql 10.0), I see no reason not to use it.
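
The setup itself is short (a sketch; publication, subscription and connection
details are made up, and the tables must already exist on the subscriber):

-- on the publisher
CREATE PUBLICATION mypub FOR ALL TABLES;

-- on the subscriber
CREATE SUBSCRIPTION mysub
  CONNECTION 'host=publisher dbname=mydb user=replicator'
  PUBLICATION mypub;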


You can have a look at an article I wrote here : 
https://severalnines.com/blog/using-postgresql-logical-replication-maintain-always-date-readwrite-test-server



Thanks,
   Istvan




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt