Re: [EXTERNAL] Re: PostgreSQL-12 replication failover, pg_rewind fails

2020-05-13 Thread Mariya Rampurawala
Thank you Kyotaro and Laurenz for your quick responses.
This helped me get my setup working.

Regards,
Mariya

On 13/05/20, 6:51 AM, "Kyotaro Horiguchi"  wrote:

At Tue, 12 May 2020 17:29:50 +0200, Laurenz Albe  
wrote in 
> On Tue, 2020-05-12 at 09:40 +, Mariya Rampurawala wrote:
> > > but if the target cluster ran for a long time after the divergence,
> > > the old WAL files might no longer be present. In that case, they can
> > > be manually copied from the WAL archive to the pg_wal directory, or
> > > fetched on startup by configuring primary_conninfo or restore_command.
> > 
> > I hit this issue every time I follow the aforementioned steps, manually
> > as well as with scripts.
> > How long is "long time after divergence"? Is there a way I can make some
> > configuration changes so that I donʼt hit this issue?
> > Is there anything I must change in my restore command?

As mentioned in the documentation, pg_rewind uses the WAL records
starting from the last checkpoint just before the divergence point. The
divergence point is shown as follows in your log messages.

> pg_rewind: servers diverged at WAL location 6/B9D8 on timeline 53

pg_rewind scans backward starting from that location to find a
checkpoint record, which is the oldest WAL record pg_rewind needs.  As
you can see, this is not something that can be calculated in advance;
there is no way other than actually restoring WAL segment files and
scanning them.

> What you can do is to use a higher value for "wal_keep_segments".
> Then PostgreSQL will keep around that number of old WAL segments,
> which increases the chance for "pg_rewind" to succeed.

So this is one effective way to reduce the chance to lose required WAL
(segment) files.
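As a sketch, on PG12 this can be raised with ALTER SYSTEM (the value, in
16 MB segments, is only an example; size it to your workload):

```sql
-- keep roughly 8 GB of old WAL around for pg_rewind / lagging standbys
ALTER SYSTEM SET wal_keep_segments = 512;
SELECT pg_reload_conf();  -- wal_keep_segments is reloadable, no restart needed
```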

On PG12, an easy way to automatically restore all required WAL files
would be to restore each WAL file that pg_rewind complains is missing,
then rerun pg_rewind.

Or, you could use pg_waldump to find a checkpoint record.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




what is the best way to access cold data on another server?

2020-05-13 Thread Amine Tengilimoglu
Hello everyone;

In addition to the actively used data, there are other data that are
very little accessed. I want to separate the less used data to save space
and reduce maintenance operations. I want to use a separate postgres
instance for this. Now the question is; How do I link these two instances?
postgres_fdw or dblink? or what other solution? Which one would be better?



thanks in advance..


Re: [EXTERNAL] Re: PostgreSQL-12 replication failover, pg_rewind fails

2020-05-13 Thread Michael Paquier
On Wed, May 13, 2020 at 04:58:15AM +, Mariya Rampurawala wrote:
> Thank you Kyotaro and Laurenz for your quick responses.
> This helped me get my setup working.

Please note that we have added in Postgres 13 the possibility to use a
restore_command when using pg_rewind if the parameter is set in the
target cluster's configuration.  This way, you can fetch missing WAL
segments from archives during the rewind operation without the need to
rely on wal_keep_segments or such. 
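As a sketch (the archive path is hypothetical), the target cluster's
postgresql.conf would then contain something like:

```
restore_command = 'cp /mnt/wal_archive/%f "%p"'
```

and pg_rewind on PG13 can be told to use it via its new
--restore-target-wal (-c) option.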
--
Michael




Column reset all values

2020-05-13 Thread otar shavadze
postgres version 12
I have very simple update query, like this:

update my_table
set
col = NULL
where
col IS NOT NULL;

my_table contains a few million rows; col is an indexed column.

The fastest way would be ALTER TABLE: drop the column and then add it
again, but I can't do this because a bunch of views and materialized
views depend on this column.

Not much hope, but still asking: is there another way to just reset all
values of a column? (to NULL, in my case)


Re: Column reset all values

2020-05-13 Thread Peter J. Holzer
On 2020-05-13 12:13:20 +0400, otar shavadze wrote:
> postgres version 12
> I have very simple update query, like this:
> 
> update my_table 
> set 
> col = NULL
> where
> col IS NOT NULL;
> 
> my_table contains few million rows, col is indexed column

You might want to drop the index before doing this. You obviously won't
need the index afterwards and the database may be able to use HOT
updates if there is no index on the column (but that depends on the
amount of unused space in each block).
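A sketch of that sequence (the index name is hypothetical):

```sql
DROP INDEX IF EXISTS my_table_col_idx;  -- hypothetical name of the index on col

UPDATE my_table SET col = NULL WHERE col IS NOT NULL;

-- optionally check how many of the updates were HOT
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'my_table';
```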

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Column reset all values

2020-05-13 Thread Olivier Gautherot
Hi Otar,

On Wed, May 13, 2020 at 10:15 AM otar shavadze  wrote:

> postgres version 12
> I have very simple update query, like this:
>
> update my_table
> set
> col = NULL
> where
> col IS NOT NULL;
>
> my_table contains few million rows, col is indexed column
>
> Fastest way would be   alter table, drop column and then add column again,
> but I can't do this just because on this column   depends bunch of views
> and materialized views.
>
> No much hope, but still asking, Is there some another way to just reset
> column all values? ( with NULL in my case)
>

If views depend on this column, you may need to drop them (in the right
order...) and then recreate them. Now, if they depend on a column that will
not contain significant data, you may wish to remove the column, or declare
it as null if you need to maintain compatibility.

Now, if you have time and down time of the database is an issue, you may
run the UPDATE in batches of 1000 rows (or whatever number fits you).
An UPDATE is essentially an insert of a new row version plus a delete of
the old one, and this copying around is the killer - doing it in one go
can temporarily increase the disk usage.
I've had success with the following approach, sketched here as a
PL/pgSQL DO block (requires PostgreSQL 11+ for transaction control
inside DO; batch size and sleep time are examples):

DO $$
DECLARE
    n bigint;
BEGIN
    LOOP
        UPDATE my_table SET col = NULL
        WHERE ctid IN (SELECT ctid FROM my_table
                       WHERE col IS NOT NULL LIMIT 1000);
        GET DIAGNOSTICS n = ROW_COUNT;
        EXIT WHEN n = 0;
        COMMIT;               -- release old row versions batch by batch
        PERFORM pg_sleep(5);  -- throttle to limit the I/O impact
    END LOOP;
END $$;

You may wish to run a VACUUM FULL manually at the end.

In my case, I had to compute individual values so the processing was a bit
more complex, but it happily processed over 60 million rows in a few days.

Hope it helps
--
Olivier Gautherot


Re: ESQL/C: a ROLLBACK rolls back a COMMITED transaction

2020-05-13 Thread Matthias Apitz


I can finally reproduce the issue with a small ESQL/C program written
for that purpose. I could attach the source here, but its printout
alone perhaps makes everything clear:

the pgm does an INSERT; after this the row is there and can be seen
with SELECT; then I CLOSE a non-existing cursor, which rolls back
the INSERTed data:

./embedded
tstint: 11073
INSERT done
SELECT done
SELECT: tstint: 11073 tstchar25: [hello]
CLOSE "foo_bar" done
SQL error: cursor "foo_bar" does not exist on line 57
SQL error: current transaction is aborted, commands ignored until end of 
transaction block on line 61
SELECT done
SELECT: tstint: 0 tstchar25: []
COMMIT done
SELECT done
SELECT: tstint: 0 tstchar25: []
ROLLBACK done
SELECT done
SELECT: tstint: 0 tstchar25: []

i.e. it is not the ROLLBACK that removes the data, but the CLOSE of a
non-existing CURSOR.

We have, in our huge application server and its DB layer, places where we
close a CURSOR in advance to be sure that declaring it again will not
cause any problem because it already exists. Until yesterday we thought
that the raised -400 error, like

[1471] [12.05.2020 15:48:50:477]: ecpg_check_PQresult on line 939: bad response 
- ERROR:  cursor "adm_partab_seq" does not exist
[1471] [12.05.2020 15:48:50:477]: raising sqlstate 34000 (sqlcode -400): cursor 
"adm_partab_seq" does not exist on line 939

could be overcome with the COMMIT without losing the inserted data.

Main question: How can we ask the PostgreSQL server if a CURSOR 'foo_bar' 
(still) does exist or not?
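One way, which later messages in this thread also arrive at, is to query
the pg_cursors system view from the same session:

```sql
-- true if a cursor named foo_bar is currently open in this session
SELECT EXISTS (SELECT 1 FROM pg_cursors WHERE name = 'foo_bar') AS cursor_exists;
```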

Thanks

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: ESQL/C: a ROLLBACK rolls back a COMMITED transaction

2020-05-13 Thread Ravi Krishna




the pgm does an INSERT, after this the row is there and can be seen
with SELECT; than I CLOSE a non existing cursor, which rolls back
the INSERTed data:



I have not done coding in ESQL/C in a long time, but shouldn't that be
expected, as any error aborts the transaction and triggers a rollback?







Why is it that “aggregate functions are not allowed in FROM clause of their own query level”?

2020-05-13 Thread Tom Ellis
Hello,

The code under 1 gives me the error message "aggregate functions are
not allowed in FROM clause of their own query level" whereas the code
under 2 is permitted. Unless I am much mistaken the latter is
equivalent to the former because it just makes a new "local" name for
`v`.

I'm writing a code generator and rewriting the former, natural form,
to the latter, artificial form, is somewhat fiddly, but ultimately
possible, I believe.

I have a few questions:

A. Am I right in thinking that the two forms are equivalent?

A1. And am I right to presume that it's always possible to rewrite
more complicated examples that yield the same error to valid versions,
just by coming up with a local name for the problematic fields?

B. If they are indeed equivalent, what is the rationale for forbidding
the former?  It seems like it would be more convenient to allow users
to write the former form.


1. Causes error:

SELECT * FROM
((SELECT 0 as "v") as "T1"
  INNER JOIN LATERAL
 (SELECT
  SUM("v") as "r"
  FROM (SELECT 0) as "T1") as "T2"
  ON TRUE) as "T1"

2. Runs successfully

SELECT * FROM
((SELECT 0 as "v") as "T1"
  INNER JOIN LATERAL
 (SELECT
  SUM("v_again") as "r"
  FROM (SELECT "v" as "v_again") as "T1") as "T2"
  ON TRUE) as "T1"


By the way, the only reference to this issue that I can find on the
entire internet is the following old  mailing list post:

https://www.postgresql.org/message-id/1375925710.17807.13.camel%40vanquo.pezone.net

I also posted on DBA.StackExchange
https://dba.stackexchange.com/questions/266988/why-is-it-that-aggregate-functions-are-not-allowed-in-from-clause-of-their-own

Thanks,

Tom




Re: what is the best way to access cold data on another server?

2020-05-13 Thread Laurenz Albe
On Wed, 2020-05-13 at 10:12 +0300, Amine Tengilimoglu wrote:
> In addition to the actively used data, there are other data that are very 
> little accessed.
> I want to separate the less used data to save space and reduce maintenance 
> operations.
> I want to use a separate postgres instance for this. Now the question is; How 
> do I link
> these two instances? postgres_fdw or dblink? or what other solution? Which 
> one would be better?

Foreign Data Wrapper is a good option, and yes, it is secure.

Another option would be to store these data as CSV files and use file_fdw
to define them as "external tables".
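A minimal postgres_fdw sketch (server, credentials, and schema names are
hypothetical):

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER cold_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'cold-host.example.com', dbname 'archive');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER cold_server
    OPTIONS (user 'archive_reader', password 'secret');

-- pull in the definitions of the archived tables
CREATE SCHEMA cold;
IMPORT FOREIGN SCHEMA public
    FROM SERVER cold_server INTO cold;
```

After that, queries against tables in the "cold" schema are shipped to
the other instance.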

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Reuse an existing slot with a new initdb

2020-05-13 Thread Support



On 5/8/2020 1:51 PM, Support wrote:

I normalized my replication slots with the names of my nodes.
I have 2 options in my recovery script that tries first pg_basebackup
to recover and sync the hot standby, but unfortunately a big DB fails
sometimes due to a very slow or unstable network. So my second option is
to completely make a new initdb and import an SQL file from a pg_dumpall
of the master, as it takes fewer bytes once compressed. But I'm facing
an issue with the slot complaining (obviously) about the hot standby
node that does not match the slot identifier. So my question is simple:
is there a way to reinitialize the slot so it does not renew the
identifier with the new hot standby initdb?


Thanks

David


No one has an answer to my question?
thanks!

David




RE: Enforcing uniqueness on [real estate/postal] addresses

2020-05-13 Thread Basques, Bob (CI-StPaul)
I've been following this thread with some interest.

Was wondering if you ever thought about binding the textual address to a USNG 
location. https://usngcenter.org/

You can easily add individual locations within something like a farm field
with as few as eight unique digits that would identify each 10-meter
square.  I'm oversimplifying my response, but once you look through how
the USNG works, you'll see the benefit of using it as an address/location
uniqueness enforcing tool.

It would easily allow for locating many different locations inside of a
larger addressed location, as well as non-addressed locations.  The USNG
location can be thought of as a unique address unto itself, and works
across the planet.  No two are alike.



Bobb



-Original Message-
From: Peter Devoy  
Sent: Tuesday, May 12, 2020 3:56 PM
To: Peter J. Holzer 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Enforcing uniqueness on [real estate/postal] addresses

Think Before You Click: This email originated outside our organization.


>Is is possible to have two entries which have the same 
>address_identifier_general, street and postcode, but different 
>descriptions?

Unfortunately, yes.  The data comes from gov't systems to regulate the 
development/alteration of arbitrary pieces of property and those pieces do not 
always have a postal address.  E.g. a farmer may one year apply to erect a wind 
turbine in "field north of Foo Cottage"
and the next year apply to demolish "barnhouse west of Foo Cottage".

Now, I know what you are thinking, there is a normalization opportunity and you 
may well be right. However, the problem does exist in some of the other fields 
too and I am already facing a fair amount of join complexity in my schema so I 
am trying to figure out my options :)

>(What is an address_identifier_general, btw?)
Address identifier composed of numbers and/or characters. I'm using the 
terminology from the EU's "INSPIRE Data Specification on Addresses" Guidelines.

I haven't yet had the opportunity to try out the above suggestions but I will 
post again when I have.




Re: Why is it that “aggregate functions are not allowed in FROM clause of their own query level”?

2020-05-13 Thread David G. Johnston
On Wednesday, May 13, 2020, Tom Ellis 
wrote:

> Hello,
>
> The code under 1 gives me the error message "aggregate functions are
> not allowed in FROM clause of their own query level" whereas the code
> under 2 is permitted. Unless I am much mistaken the latter is
> equivalent to the former because it just makes a new "local" name for
> `v`.
>

https://www.postgresql.org/docs/12/sql-select.html

Because step 2 precedes step 4.


> A. Am I right in thinking that the two forms are equivalent?


In so far as if the first one could be executed it would provide the same
result, yes...i think


>
> A1. And am I right to presume that it's always possibly to rewrite
> more complicated examples that yield the same error to valid versions,
> just by coming up with a local name for the problematic fields?


Don’t feel like figuring out a counter-example, your given example is not
compelling enough


>
> B. If they are indeed equivalent, what is the rationale for forbidding
> the former?  It seems like it would be more convenient to allow users
> to write the former form.


It wasn’t directly intentional.  Lateral came long after from/group by.
But since it extends the from clause the processing order puts it before
aggregation.


> 1. Causes error:
>
> SELECT * FROM
> ((SELECT 0 as "v") as "T1"
>   INNER JOIN LATERAL
>  (SELECT
>   SUM("v") as "r"
>   FROM (SELECT 0) as "T1") as "T2"
>   ON TRUE) as "T1"
>
> 2. Runs successfully
>
> SELECT * FROM
> ((SELECT 0 as "v") as "T1"
>   INNER JOIN LATERAL
>  (SELECT
>   SUM("v_again") as "r"
>   FROM (SELECT "v" as "v_again") as "T1") as "T2"
>   ON TRUE) as "T1"
>
>
> By the way, the only reference to this issue that I can find on the
> entire internet is the following old  mailing list post:
>
> https://www.postgresql.org/message-id/1375925710.17807.
> 13.camel%40vanquo.pezone.net


Yeah, because usually one just writes your example:

Select sum("v") from (select 0 as "v") as t1 group by "v";


> I also posted on DBA.StackExchange
> https://dba.stackexchange.com/questions/266988/why-is-it-
> that-aggregate-functions-are-not-allowed-in-from-clause-of-their-own
>
>
I did not.

David J.


RE: Enforcing uniqueness on [real estate/postal] addresses

2020-05-13 Thread Rich Shepard

On Wed, 13 May 2020, Basques, Bob (CI-StPaul) wrote:


Was wondering if you ever thought about binding the textual address to a
USNG location. https://usngcenter.org/


Bobb, et al.:

Why not have a 'parent' table with entries such as 'Foo Farm' and a 'child'
table with rows for sub-parts of the parent; e.g., 'Barn', 'Field'?

Rich
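That parent/child split might look like (table and column names are
hypothetical):

```sql
CREATE TABLE location (
    id   bigserial PRIMARY KEY,
    name text NOT NULL UNIQUE              -- e.g. 'Foo Farm'
);

CREATE TABLE location_part (
    id          bigserial PRIMARY KEY,
    location_id bigint NOT NULL REFERENCES location (id),
    part_name   text NOT NULL,             -- e.g. 'Barn', 'Field'
    UNIQUE (location_id, part_name)
);
```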




Re: ESQL/C: a ROLLBACK rolls back a COMMITED transaction

2020-05-13 Thread Tom Lane
Matthias Apitz  writes:
> El día Dienstag, Mai 12, 2020 a las 08:01:15 -0400, Tom Lane escribió:
>> Not an unreasonable suggestion, but it'd be more likely to happen if
>> you send in a patch ;-).

> as the first argument to ECPGdo() is of type int we can not do a hack
> like "__LINE__:"__FILE__ (i.e. concatenating line number and filename
> into one string. We have to change the call interface function ECPGdo()
> and add a string argument.

Yeah, now that you mention that, this was discussed before.  We cannot
change the signature of ECPGdo as that would break existing client
binaries.  We could leave it alone (and unused by newer compilations)
if we add a new entry point, though.  "ECPGnewdo()", anybody?

regards, tom lane




RE: Enforcing uniqueness on [real estate/postal] addresses

2020-05-13 Thread Basques, Bob (CI-StPaul)
Yup, that's where I was going with the USNG use.  

Separate dataset bound to addresses where needed/desired for detailed
locating.  And where no address is present, the USNG becomes the address.

Bobb



-Original Message-
From: Rich Shepard  
Sent: Wednesday, May 13, 2020 9:01 AM
To: pgsql-general@lists.postgresql.org
Subject: RE: Enforcing uniqueness on [real estate/postal] addresses



On Wed, 13 May 2020, Basques, Bob (CI-StPaul) wrote:

> Was wondering if you ever thought about binding the textual address to 
> a USNG location. https://usngcenter.org/

Bobb, et al.:

Why not have a 'parent' table with entries such as 'Foo Farm' and a 'child'
table with rows for sub-parts of the parent; e.g., 'Barn', 'Field'?

Rich






Create user mapping for role

2020-05-13 Thread Gerard Weatherby
Can a foreign data wrapper user mapping be created for multiple users via a 
role? I.e., if bob, sally, and john belong to role foreignusers, is it possible 
to do something like
Create user mapping for foreignusers server …





Re: Enforcing uniqueness on [real estate/postal] addresses

2020-05-13 Thread David Goodenough
There is also What3Words.com, which gives a three-word name to each 3m square 
over the world.  Longer than USNG, but easier to remember/type/say.

David

On Wednesday, 13 May 2020 14:33:30 BST Basques, Bob (CI-StPaul) wrote:
> I've been following this thread with some interest.
> 
> Was wondering if you ever thought about binding the textual address to a
> USNG location. https://usngcenter.org/
 
> You can easily add individual locations within something like a farm field
> with as few as eight  unique digits that would identify each 10 meter
> square.  I'm oversimplifying my response, but once you look through how the
> USNG works, you'll see the benefit for using it as a address/location
> uniqueness enforcing tool. 
 
> It would easily allow for locating many different locations inside of a
> larger addressed location, as well as non, addressed locations.  The USNG
> location can be thought of as a unique address unto itself, and works 
> across the planet.  No two are alike.
 
> 
> 
> Bobb
> 
> 
> 
> -Original Message-
> From: Peter Devoy  
> Sent: Tuesday, May 12, 2020 3:56 PM
> To: Peter J. Holzer 
> Cc: pgsql-general@lists.postgresql.org
> Subject: Re: Enforcing uniqueness on [real estate/postal] addresses
> 
> 
> 
> 
> >Is is possible to have two entries which have the same 
> >address_identifier_general, street and postcode, but different 
> >descriptions?
> 
> 
> Unfortunately, yes.  The data comes from gov't systems to regulate the
> development/alteration of arbitrary pieces of property and those pieces do
> not always have a postal address.  E.g. a farmer may one year apply to
> erect a wind turbine in "field north of Foo Cottage" and the next year
> apply to demolish "barnhouse west of Foo Cottage".
> Now, I know what you are thinking, there is a normalization opportunity and
> you may well be right. However, the problem does exist in some of the other
> fields too and I am already facing a fair amount of join complexity in my
> schema so I am trying to figure out my options :)
 
> 
> >(What is an address_identifier_general, btw?)
> 
> Address identifier composed by numbers and/or characters. I'm using the
> terminology from the EU's "INSPIRE Data Specification on Addresses"
> Guidelines.
 
> I haven't yet had the opportunity to try out the above suggestions but I
> will post again when I have.
 
> 




Re: Create user mapping for role

2020-05-13 Thread Tom Lane
Gerard Weatherby  writes:
> Can a foreign data wrapper user mapping by created for multiple users via a 
> role? i.e. if bob, sally, and john belong to role foreigusers, is it possible 
> to do something like
> Create user mapping for foreignusers server …

You can certainly create a user mapping for a role.  But I think you'll
find that sally et al have to do "SET ROLE foreignusers" to use it.
It's difficult to see a way to avoid that --- after all, if sally is
also a member of foreignuserstoo that also has a mapping, which one
should the system use?
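A sketch of what that looks like (server and table names are
hypothetical):

```sql
CREATE USER MAPPING FOR foreignusers
    SERVER myserver
    OPTIONS (user 'remote_user', password 'secret');

GRANT foreignusers TO sally;

-- in sally's session, the mapping only applies after switching role:
SET ROLE foreignusers;
SELECT * FROM some_foreign_table;
```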

regards, tom lane




Re: Why is it that “aggregate functions are not allowed in FROM clause of their own query level”?

2020-05-13 Thread Tom Lane
Tom Ellis  writes:
> A. Am I right in thinking that the two forms are equivalent?

No.  In the first case the SUM() aggregate does not use any variables
belonging to the "LATERAL (SELECT ..." query level, therefore, per
SQL standard, it is not an aggregate of that query but an aggregate
of the next query level up, whose variable(s) it does use.  And
within the structure of *that* query level, it's in the wrong place.

> A1. And am I right to presume that it's always possibly to rewrite
> more complicated examples that yield the same error to valid versions,
> just by coming up with a local name for the problematic fields?

In the particular case here, the problem is to control which query level
the aggregate is deemed to belong to.  I think the issue is less about
"can I rewrite the query" and more about "do I actually understand the
semantics this is asking for", so I'd be hesitant to let a tool think
that it can rearrange things like this.

regards, tom lane




Re: ESQL/C: a ROLLBACK rolls back a COMMITED transaction

2020-05-13 Thread Adrian Klaver

On 5/12/20 10:34 PM, Matthias Apitz wrote:

El día Dienstag, Mai 12, 2020 a las 05:17:33 -0700, Adrian Klaver escribió:


insert into swd_auftrag ..

COMMIT?


This question (if it was a question) I don't understand.


 From your original message:

"The INSERT of 1 row into table swd_daten was OK and commit'ed (marked line)
and a later rollback (last line) seems to roll it back, at least the row
isn't in the table."

It was not clear to me whether:

"[1471] [12.05.2020 15:48:50:478]: ecpg_execute on line 1637: query: insert
into swd_auftrag ( setnr , aufnum , katkey , userid , seqcount ) values ( $1
, $2  , $3  , $4  , $5  ); with 5 parameter(s) on connection sisis
[1471] [12.05.2020 15:48:50:478]: ecpg_process_output on line 1637: OK:
INSERT 0 1"

also COMMITT(ed) or not?


As I said in the original post of this thread:

[1471] [12.05.2020 15:48:50:476]: ecpg_execute on line 1744: query: insert into 
swd_daten ( katkey , aktion , reserv , id , ansetzung , nettodaten ) values ( 
$1  , $2  , $3  , $4  , $5  , $6  ); with 6 parameter(s) on connection sisis
[1471] [12.05.2020 15:48:50:476]: ecpg_process_output on line 1744: OK: INSERT 
0 1
[1471] [12.05.2020 15:48:50:477]: ECPGtrans on line 6716: action "commit"; connection 
"sisis"
...

i.e. the COMMIT is done in the same connection(!) right after the
INSERT. There is no other ESQL/C call logged (and done) between.

There is only no ACK from the ESQL/C layer about the COMMIT of
ECPGtrans, but this is a fault in the ESQL/C layer code, because the
ECPGtrans is logged before executing it and afterward if it's done OK no
message is generated.



In your original post you had:

"We're facing in our ESQL/C written application a situation where a
commit'ed INSERT into a table is rolled back. I have here the ESQL/C
logging of the problem:"
...

"The INSERT of 1 row into table swd_daten was OK and commit'ed (marked line)
and a later rollback (last line) seems to roll it back, at least the row
isn't in the table.

Any ideas? The connection is not set to AUTOCOMMIT."

You then included a sequence of log messages that ended with a 
"rollback". Within that sequence was the INSERT to swd_auftrag. It 
seemed reasonable to ask whether that INSERT rolled back also. That is 
if the intent of this thread is to figure out why the INSERT was rolled 
back. If the thread has changed to fixing ESQL/C logging then ignore the 
above.




matthias





--
Adrian Klaver
adrian.kla...@aklaver.com




Practical usage of large objects.

2020-05-13 Thread Dmitry Igrishin
Hello all,

As you know, PostgreSQL has a large objects facility [1]. I'm curious
are there real systems which are use this feature? I'm asking because
and I'm in doubt should the Pgfe driver [2] provide the convenient API
for working with large objects or not.

Thanks!

[1] https://www.postgresql.org/docs/12/largeobjects.html
[2] https://github.com/dmitigr/pgfe




Re: Practical usage of large objects.

2020-05-13 Thread Tom Lane
Dmitry Igrishin  writes:
> As you know, PostgreSQL has a large objects facility [1]. I'm curious
> are there real systems which are use this feature?

We get questions about it regularly, so yeah people use it.
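For reference, the server-side large object functions can be exercised
from plain SQL (the OID below is hypothetical; use whatever
lo_from_bytea returns):

```sql
-- create a large object holding 'Hello' and return its OID
SELECT lo_from_bytea(0, '\x48656c6c6f'::bytea) AS loid;

-- read it back and delete it, substituting the OID from above
SELECT lo_get(16438);     -- hypothetical OID
SELECT lo_unlink(16438);
```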

regards, tom lane




Re: ESQL/C: a ROLLBACK rolls back a COMMITED transaction

2020-05-13 Thread Matthias Apitz
El día Mittwoch, Mai 13, 2020 a las 08:15:40 -0700, Adrian Klaver escribió:

> In your original post you had:
> 
> "We're facing in our ESQL/C written application a situation where a
> commit'ed INSERT into a table is rolled back. I have here the ESQL/C
> logging of the problem:"
> ...
> 
> "The INSERT of 1 row into table swd_daten was OK and commit'ed (marked line)
> and a later rollback (last line) seems to roll it back, at least the row
> isn't in the table.
> 
> Any ideas? The connection is not set to AUTOCOMMIT."
> 
> You then included a sequence of log messages that ended with a "rollback".
> Within that sequence was the INSERT to swd_auftrag. It seemed reasonable to
> ask whether that INSERT rolled back also. That is if the intent of this
> thread is to figure out why the INSERT was rolled back. If the thread has
> changed to fixing ESQL/C logging then ignore the above.

The intention of my original post was to understand why the INSERT was
rolled back. I know this now: I had overlooked that the transaction was
aborted after the INSERT by the CLOSE of a cursor that was not open.

We're already fixing this by checking in pg_cursors whether the CURSOR is
still open before issuing the CLOSE. I don't know how expensive this is,
but it seems that there is no other option to check this.

The side step about fixing ESQL/C logging should be handled in another
thread.

Thanks all for your help

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: Reuse an existing slot with a new initdb

2020-05-13 Thread Laurenz Albe
On Wed, 2020-05-13 at 06:18 -0700, Support wrote:
> On 5/8/2020 1:51 PM, Support wrote:
> > I normalized my replislots with the name of my nodes.
> > I have 2 options in my recovery script that tries first pg_basebackup
> > to recover and sync the hot standby, but unfortunately big DB fails 
> > sometimes due
> > to very slow or unstable network. So my second option is to completely 
> > make a new inidb and import an sql file from pg_dumpall master as it 
> > takes less bytes once compressed. But I'm facing an issue with the 
> > slot complaining (obviously) about the ho standby node that does not 
> > match the slot identifier. So my question is simple, is there a way to 
> > reinitialize the slot to not renew the identifier with the new hot 
> > standby initdb?
> 
> No one has an answer to my question?

That may be because your question is hard to understand.

You cannot create a standby server using "pg_dumpall", so it is
unclear what exactly you are doing here.

Also, it is not clear what error message you get.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Reuse an existing slot with a new initdb

2020-05-13 Thread live-school support



Best Regards

LIVE-SCHOOL FOUNDATION
http://www.live-school.net
in...@live-school.net
On 5/13/2020 1:24 PM, Laurenz Albe wrote:

On Wed, 2020-05-13 at 06:18 -0700, Support wrote:

On 5/8/2020 1:51 PM, Support wrote:

I normalized my replislots with the name of my nodes.
I have 2 options in my recovery script that tries first pg_basebackup
to recover and sync the hot standby, but unfortunately big DB fails
sometimes due
to very slow or unstable network. So my second option is to completely
make a new inidb and import an sql file from pg_dumpall master as it
takes less bytes once compressed. But I'm facing an issue with the
slot complaining (obviously) about the ho standby node that does not
match the slot identifier. So my question is simple, is there a way to
reinitialize the slot to not renew the identifier with the new hot
standby initdb?

No one has an answer to my question?

That may be because your question is hard to understand.

You cannot create a standby server using "pg_dumpall", so it is
unclear what exactly you are doing here.

Also, it is not clear what error message you get.

Yours,
Laurenz Albe

Thanks Laurenz,

I didn't recall that it was not possible to create a hot standby with a 
fresh new install and pg_dumpall :(.
Only pg_basebackup or an exact copy of the data folder can do it, right? 
Is the reason technical, or something else?





Re: Reuse an existing slot with a new initdb

2020-05-13 Thread Support




On 5/13/2020 1:24 PM, Laurenz Albe wrote:

On Wed, 2020-05-13 at 06:18 -0700, Support wrote:

On 5/8/2020 1:51 PM, Support wrote:

I normalized my replislots with the name of my nodes.
I have 2 options in my recovery script that tries first pg_basebackup
to recover and sync the hot standby, but unfortunately big DB fails
sometimes due
to very slow or unstable network. So my second option is to completely
make a new inidb and import an sql file from pg_dumpall master as it
takes less bytes once compressed. But I'm facing an issue with the
slot complaining (obviously) about the ho standby node that does not
match the slot identifier. So my question is simple, is there a way to
reinitialize the slot to not renew the identifier with the new hot
standby initdb?

No one has an answer to my question?

That may be because your question is hard to understand.

You cannot create a standby server using "pg_dumpall", so it is
unclear what exactly you are doing here.

Also, it is not clear what error message you get.

Yours,
Laurenz Albe

> it is not clear what error message you get.
Each cluster apparently has an internal identifier based on the hot
standby initdb, set when it connected to the master the first time(?) or
when a pg_basebackup occurred previously. This identifier (a unique
bigint) obviously does not match if I connect the hot standby with a new
initdb and a restore from a pg_dumpall copy of the master.
Sad, because everything seems to be running, but the master just does
not like that the identifier doesn't match up. (Sorry, I cannot show you
the original error since I run the db in prod now.)







Re: ESQL/C: a ROLLBACK rolls back a COMMITED transaction

2020-05-13 Thread Adrian Klaver

On 5/13/20 11:16 AM, Matthias Apitz wrote:

On Wednesday, May 13, 2020 at 08:15:40 -0700, Adrian Klaver wrote:


In your original post you had:

"We're facing in our ESQL/C written application a situation where a
commit'ed INSERT into a table is rolled back. I have here the ESQL/C
logging of the problem:"
...

"The INSERT of 1 row into table swd_daten was OK and commit'ed (marked line)
and a later rollback (last line) seems to roll it back, at least the row
isn't in the table.

Any ideas? The connection is not set to AUTOCOMMIT."

You then included a sequence of log messages that ended with a "rollback".
Within that sequence was the INSERT to swd_auftrag. It seemed reasonable to
ask whether that INSERT rolled back also. That is if the intent of this
thread is to figure out why the INSERT was rolled back. If the thread has
changed to fixing ESQL/C logging then ignore the above.


The intention of my original post was to understand why the INSERT was
rolled back. I know this now: I had overlooked that the transaction was
aborted after the INSERT by a CLOSE of a cursor that was not open.


Huh, this message:

https://www.postgresql.org/message-id/20200513101301.GC26063%40sh4-5.1blu.de

got delayed in the ether somewhere. It showed up recently, so now I see 
the issue.




We're fixing this now by checking in pg_cursors whether the CURSOR is
still open before issuing the CLOSE. I don't know how expensive this is,
but there seems to be no other way to check it.
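A minimal sketch of such a check, assuming a cursor named `swd_cur` (the
actual cursor name in the application will differ):

```sql
-- pg_cursors lists the cursors currently open in this session.
-- Only issue the CLOSE if the cursor actually appears there.
SELECT count(*) FROM pg_cursors WHERE name = 'swd_cur';
-- count = 1  ->  safe to run:  CLOSE swd_cur;
-- count = 0  ->  skip the CLOSE to avoid the error (and the rollback).
```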

The side step about fixing ESQL/C logging should be handled in another
thread.

Thanks all for your help

matthias




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Practical usage of large objects.

2020-05-13 Thread Ron
Our databases use bytea instead.  (I don't know why the application vendor 
decided on that.)


On 5/13/20 12:53 PM, Dmitry Igrishin wrote:

Hello all,

As you know, PostgreSQL has a large objects facility [1]. I'm curious
whether there are real systems which use this feature? I'm asking
because I'm in doubt whether the Pgfe driver [2] should provide a
convenient API for working with large objects or not.

Thanks!

[1] https://www.postgresql.org/docs/12/largeobjects.html
[2] https://github.com/dmitigr/pgfe




--
Angular momentum makes the world go 'round.




Re: Reuse an existing slot with a new initdb

2020-05-13 Thread Michael Paquier
On Wed, May 13, 2020 at 02:12:45PM -0700, live-school support wrote:
> I didn't recall that it was not possible to create a hot standby with a fresh
> new install and pg_dumpall :(.
>
> only pg_basebackup or an exact copy of the data folder can do it right? is
> the reason technical or else?

When using physical replication, both the primary and the standby need
to have the same system ID, and both instances need to share the same
architecture to work properly, as data is physically replayed from one
cluster to the other using WAL, which includes for example copies of
on-disk 8kB relation pages (ever heard of full_page_writes?).
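
For reference, the system identifier can be checked on both sides;
`pg_control_system()` is available on a running server (PostgreSQL 9.6
and later), and pg_controldata reads it from a stopped cluster's data
directory:

```sql
-- Run on both primary and standby; the values must be identical
-- for physical replication to work.
SELECT system_identifier FROM pg_control_system();
-- Offline alternative:  pg_controldata $PGDATA | grep 'system identifier'
```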

Like Laurenz, I am not sure what your problem is, what errors
PostgreSQL is producing, and what you expect from Postgres.  If you
could describe clearly, step by step, what you are doing and what you
expect the result to be based on your configuration, we may be able
to help, but it is not really possible to help out without more
details.  For example, the first sentence of your first email mentions
the use of replication slots.  You may want to explain better where
the slots are used, how they get dropped and/or recreated, etc.
--
Michael


signature.asc
Description: PGP signature


Re: Practical usage of large objects.

2020-05-13 Thread Michael Paquier
On Wed, May 13, 2020 at 01:55:48PM -0400, Tom Lane wrote:
> Dmitry Igrishin  writes:
>> As you know, PostgreSQL has a large objects facility [1]. I'm curious
>> whether there are real systems which use this feature?
> 
> We get questions about it regularly, so yeah people use it.

I recall that some applications where I work make use of it for some
rather large log-like data.  At the end of the day, it really boils
down to whether you wish to store blobs of data larger than 1GB, the
limit for toasted fields, as LOs can be up to 4TB.  Also, updating or
reading a LO can be much cheaper than a toasted field, as the latter
is updated/read as a whole.
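
As a sketch of why partial access is cheaper, the server-side
large-object functions read and write slices directly (the OID shown
below is illustrative; it is whatever lo_create returns):

```sql
-- Create a new large object; 0 means "assign a free OID".
SELECT lo_create(0);                    -- returns the new OID, e.g. 16441
-- Write 4 bytes at offset 0 of that object.
SELECT lo_put(16441, 0, '\xdeadbeef');
-- Read only 2 bytes starting at offset 1, without fetching the rest.
SELECT lo_get(16441, 1, 2);
```

A bytea column, by contrast, is TOASTed and is generally rewritten as a
whole on update.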
--
Michael


signature.asc
Description: PGP signature


Reuse an existing slot with a new initdb

2020-05-13 Thread David G. Johnston
On Wednesday, May 13, 2020, Michael Paquier  wrote:

> On Wed, May 13, 2020 at 02:12:45PM -0700, live-school support wrote:
> > I didn't recal that it was not possible to create a hot standby with a
> fresh
> > new install and pg_dumpall :(.
> >
> > only pg_basebackup or an exact copy of the data folder can do it right?
> is
> > the reason technical or else?
>
> When using physical replication, both the primary and standby need to
> have the same system ID, and both instances need to share the same
> architectures to work properly as data is physically replayed from one
> cluster to the other using WAL, which includes for example copies of
> on disk relation 8kB pages (ever heard of full_page_writes?).
>

This basically hits the nail on the head.  My reading is that the OP has
two abstractly identical restored databases, one created from a physical
copy and the other from a logical copy.  The issue is why the original
server cannot use the same replication slot name to continue synchronizing
with the logically restored one but is able to continue with the physically
restored one.  The above is why.

The OP asks whether the technical identifier error encountered can be
overcome.  It cannot, but even if it could, the attempt would still
fail due to fundamental differences in physical data layout between
physical and logical restoration.  If the OP needs to rebuild a
physical-replication hot standby database, they must use a physical
backup of the original database as a starting point.  Targeting a
logically restored database would require logical replication.
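
For reference, rebuilding such a standby from a physical base backup
looks roughly like this on PostgreSQL 12 (host, user, directory, and
slot names below are placeholders):

```shell
# Take a physical copy of the primary and configure it as a standby;
# -R writes the replication settings, -S reuses an existing slot.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/12/standby \
    -X stream -R -S my_slot -P
```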

David J.