How to manipulate field in New record

2023-06-07 Thread Lorusso Domenico
Hello,
Looking for a generic solution, I've written a trigger function that
receives the name of a specific field as an optional parameter (argv[0]).

My need is to get the field from the NEW and OLD records, manipulate it,
and set the field back with the new value.

Problem 1: how to get and set this field
Problem 2: the field has a composite type
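
One way to address both problems in plain PL/pgSQL is to round-trip the row
through jsonb: the field name can then be dynamic, and composite-typed
columns survive the conversion. A minimal sketch, with the table name
(mytable) and column name (audit_info) made up for illustration:

```sql
CREATE OR REPLACE FUNCTION touch_field() RETURNS trigger AS $$
DECLARE
  col text := TG_ARGV[0];   -- field name passed as the trigger argument
  val jsonb;
BEGIN
  -- read the dynamically named field from NEW
  val := to_jsonb(NEW) -> col;

  -- ... manipulate val here as needed ...

  -- write it back: rebuild NEW, overriding just that one field
  NEW := jsonb_populate_record(NEW, jsonb_build_object(col, val));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_touch
  BEFORE INSERT OR UPDATE ON mytable
  FOR EACH ROW EXECUTE FUNCTION touch_field('audit_info');
```

jsonb_populate_record keeps the values from the base record (NEW) for any
field not present in the jsonb argument, so only the named column changes.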


Re: How to manipulate field in New record

2023-06-07 Thread hubert depesz lubaczewski
On Wed, Jun 07, 2023 at 02:12:58PM +0200, Lorusso Domenico wrote:
> Hello,
> Looking for a generic solution, I've written a trigger function that
> receives the name of a specific field as an optional parameter (argv[0]).
> 
> My need is to get the field from the NEW and OLD records, manipulate it,
> and set the field back with the new value.
> 
> Problem 1: how to get and set this field
> Problem 2: the field has a composite type

Do this in pl/perl, pl/python, pl/tcl or anything like this.

Or use one of methods shown here:
https://www.depesz.com/2021/04/21/getting-value-from-dynamic-column-in-pl-pgsql-triggers/
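To illustrate the PL/Perl route: in PL/Perl triggers the row is exposed as a
plain hashref, so a dynamically named column is just a hash lookup. A sketch
(untested, structure per the PL/Perl trigger interface):

```sql
CREATE OR REPLACE FUNCTION touch_field_perl() RETURNS trigger AS $$
    my $col = $_TD->{args}[0];    -- column name from the trigger arguments
    my $val = $_TD->{new}{$col};  -- read the field from NEW

    # ... manipulate $val here as needed ...

    $_TD->{new}{$col} = $val;     -- write it back
    return "MODIFY";              -- tell PostgreSQL that NEW was changed
$$ LANGUAGE plperl;
```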

depesz




Re: How to manipulate field in New record

2023-06-07 Thread Adrian Klaver

On 6/7/23 05:12, Lorusso Domenico wrote:

> Hello,
> Looking for a generic solution, I've written a trigger function that
> receives the name of a specific field as an optional parameter (argv[0]).


> My need is to get the field from the NEW and OLD records, manipulate it,
> and set the field back with the new value.


A new row won't have the OLD RECORD.

Read:

https://www.postgresql.org/docs/current/plpgsql-trigger.html

and look at the examples.



> Problem 1: how to get and set this field
> Problem 2: the field has a composite type


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: How to manipulate field in New record

2023-06-07 Thread Lorusso Domenico
Yes, I know; the trigger should fire BEFORE INSERT, UPDATE, and DELETE, and
the function decides what to manipulate based on TG_OP.

Il giorno mer 7 giu 2023 alle ore 15:57 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 6/7/23 05:12, Lorusso Domenico wrote:
> > Hello,
> > Looking for a generic solution, I've written a trigger function that
> > receives the name of a specific field as an optional parameter (argv[0]).
> >
> > My need is to get the field from the NEW and OLD records, manipulate it,
> > and set the field back with the new value.
>
> A new row won't have the OLD RECORD.
>
> Read:
>
> https://www.postgresql.org/docs/current/plpgsql-trigger.html
>
> and look at the examples.
>
> >
> > Problem 1: how to get and set this field
> > Problem 2: the field has a composite type
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
Domenico L.

to amaze for half an hour, a history book is enough;
I tried to learn the Treccani by heart... [F.d.A.]


Native Logical Replication Initial Import Qs

2023-06-07 Thread Don Seiler
Good afternoon.

I'm looking at having to move a fleet of PG 12 databases from Ubuntu 18.04
to Ubuntu 22.04. This means crossing the dreaded libc collation change, so
we're looking to have to migrate via pg_dump/restore or logical replication
for the bigger/busier ones. We're also planning to use PG 15 on the
destination (Ubuntu 22.04) side to kill two birds with one stone, as much
as I'd prefer to have minimal moving parts.

On the logical replication front, the concern is with the initial data
import that happens when the subscription is created (by default). I know
that you can tell the subscription to not copy data and instead use pg_dump
and a replication slot snapshot to achieve this manually. However, I'm
unable to explain (to myself) why this is better than just having the
subscription do it upon creation. Given that I can create pub/sub sets for
individual tables for parallel operations, I'm curious what advantages
there are in using pg_dump to do this import.

I had been planning to have pg_dump pipe directly into the destination
database via psql. Is this faster than just having the subscription do the
import? I'm curious as to why or why not. I know to create only the minimal
indexes required on the destination side (i.e. identity-related indexes) and
omit other indexes and constraints until after the data is loaded, but that
is true for either method.
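
For reference, the manual route being weighed here can be sketched as
follows. All names (mysub, mypub, the hosts, the snapshot identifier) are
placeholders; the snapshot name is whatever step 1 actually prints:

```
# 1. On the source, create the logical slot over a replication connection;
#    CREATE_REPLICATION_SLOT prints the exported snapshot's name.
psql "host=src dbname=app replication=database" \
     -c "CREATE_REPLICATION_SLOT mysub LOGICAL pgoutput EXPORT_SNAPSHOT;"

# 2. Dump the data as of that snapshot and pipe it into the destination.
pg_dump --snapshot=00000003-0000001B-1 --data-only \
        "host=src dbname=app" | psql "host=dst dbname=app"

# 3. On the destination, attach a subscription to the pre-created slot
#    without re-copying the data; replication resumes from the slot's
#    start point, so nothing between steps 1 and 3 is lost.
psql "host=dst dbname=app" -c "CREATE SUBSCRIPTION mysub \
     CONNECTION 'host=src dbname=app' PUBLICATION mypub \
     WITH (copy_data = false, create_slot = false, slot_name = 'mysub');"
```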

Thanks,
Don.

-- 
Don Seiler
www.seiler.us


Re: Native Logical Replication Initial Import Qs

2023-06-07 Thread Jeremy Schneider
On 6/7/23 2:12 PM, Don Seiler wrote:
> On the logical replication front, the concern is with the initial data
> import that happens when the subscription is created (by default). I
> know that you can tell the subscription to not copy data and instead use
> pg_dump and a replication slot snapshot to achieve this manually.
> However I'm unable to explain (to myself) why this is better than just
> having the subscription do it upon creation. Given that I can create
> pub/sub sets for individual tables for parallel operations, I'm curious
> what advantages there are in using pg_dump to do this import.

FWIW, I think the place this feature shines the most is when you can
safely leverage things like storage-level snapshots. Sometimes that
means you can get a copy of a multi-TB database almost instantly if the
storage or filesystem does copy-on-write, for example Database Lab
(postgres.ai), which uses ZFS.

Another thing I can think of is that while it's true you can create
multiple pub/sub sets, I'm not sure you can reduce the number of sets
later. So if you were concerned about having too many slots doing
decoding on the source, then you might want the flexibility of pg_dump
(or perhaps restoring a backup) to get more parallelism while having
more control over how many slots will be used later.

In your case, the whole setup is hopefully temporary, so maybe these
particular concerns aren't as relevant to you.

This is just what comes to mind... probably there's a few more things
I'm not thinking of and hopefully others will chime in.  :)

-Jeremy


-- 
http://about.me/jeremy_schneider