Re: PostgreSQL replication lag - Suggestions and questions

2021-06-10 Thread Vijaykumar Jain
> My current architecture is: master (AZ1) --> read slave (AZ2) -->
hot-standby (AZ2) - They are all using streaming replication.

I am not sure of the difference between a read replica and a hot standby; at
least in later versions a hot standby allows read queries.
Unless you specifically do not want queries to go to the AZ2 nodes, both of
them should be able to serve the queries.
With libpq 10 onwards, you can provide multiple hosts in the connection
string, and it will "automatically" move on to the next available node if one
is not responding.
See
PostgreSQL: Documentation: 13: 33.1. Database Connection Control Functions
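For example, something along these lines (just a sketch; the host names, user
and target_session_attrs value are placeholders to adapt):

postgresql://app_user@db-az1.example:5432,db-az2.example:5432/mydb?target_session_attrs=any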



> All read-only queries are sent to the read slave. Sometimes the
replication lag between the master and the slaves reaches up to 10
minutes.I understand that the lag is expected in any replication scenario,
and below you can find some suggestions that I think would help to minimize
the lag time.

Lag can be the result of multiple causes, like
intermittent connection issues, or
the network being unable to keep up with the WAL volume. A typical example is
any action that rewrites a huge table.
Lag can also result in queries getting cancelled on the replica, or in reading
stale data.
You can tune the replica with these parameters if you want to avoid query
cancellations and are ok with slightly stale data; alternatively, the
application can catch the query-cancellation error and retry at the
application layer.
RUNTIME-CONFIG-REPLICATION-STANDBY
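For example, on the standby, something like this (a sketch; the exact values
depend on how much staleness you can accept):

ALTER SYSTEM SET hot_standby_feedback = on;             -- reduces cancellations caused by vacuum on the primary
ALTER SYSTEM SET max_standby_streaming_delay = '120s';  -- lets queries delay WAL apply for up to 2 minutes
SELECT pg_reload_conf();                                -- both settings only need a reload, not a restart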


If you cannot deal with stale data at all, then you have two options.
1) Synchronous replication.
In very simple terms: with a cluster like Primary -> (Replica R1, Replica R2),
the primary waits for an acknowledgement from at least one of the replicas
before a commit returns. This may reduce your TPS, but you get consistency;
a small configuration sketch follows after the link below. Geo-distributed
setups with synchronous replication can be problematic, though.
2) Read from the primary alone.
This is always the safest option, as there is no intermediate network, but it
also means the whole read load lands on the primary and affects its
performance and housekeeping work such as vacuuming. And in case the server
goes down, you cannot perform any reads or writes until failover completes,
which depends on the catch-up time.
SYNCHRONOUS-REPLICATION
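For option 1, a minimal primary-side sketch on v10 or later (untested here; r1
and r2 are placeholder names that must match the standbys' application_name):

ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (r1, r2)';  -- wait for an ack from any one of the two
SELECT pg_reload_conf();
SHOW synchronous_commit;  -- with the default 'on', a commit waits for the flush on the chosen standby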



> Having the read slave in the same AZ as its master - for better network
throughput;
AZ spread is meant for availability. It may be ok to have some read replicas
in the same AZ as the primary, but also keep some in another AZ in case of a
major outage.
If you can use a load balancer, you can give more weight to local replicas
than to remote ones, and use a health check based on replication lag (for
example the query sketched below the link) so that read requests are only
served by nodes with tolerable lag.
scaling-postgresql-using-connection-poolers-and-load-balancers-for-an-enterprise-grade-environment/
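One way to measure apply lag on a replica for such a health check (a sketch;
the acceptable threshold is up to you):

SELECT COALESCE(EXTRACT(epoch FROM now() - pg_last_xact_replay_timestamp()), 0) AS lag_seconds;
-- note: on an idle primary this value keeps growing even when the standby is fully caught up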


> Having the latest PostgreSQL version to get its best performance
Mostly, yes. It is not just performance improvements; a lot of new features
such as declarative partitioning, logical replication, and stored procedures
are only available in newer versions.
PostgreSQL: Release Notes 

> Having the latest Operational System behind PostgreSQL to get its best IO
performance
Yes - at the software layer, the latest versions are "mostly" more performant,
but the hardware layer still plays an important role.
If you want to really validate improvements, you can run pgbench/sysbench
stress tests on the versions you are interested in and see the changes yourself.
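For example (a rough sketch; the scale factor, client count and duration are
arbitrary placeholders - run the same workload against each version you want
to compare):

pgbench -i -s 50 testdb            # initialize a scale-50 test database
pgbench -c 16 -j 4 -T 300 testdb   # 16 clients, 4 worker threads, 5 minutes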

Upgrading is always needed anyway, not just for performance reasons but for
security reasons as well.
Also, keeping an upgrade on hold for a long time may mean jumping across
multiple major versions. That usually works, but you would need to read the
release notes for every version in between to ensure nothing breaks, so
regular upgrades avoid that pain.


-- 
Thanks,
Vijay
Mumbai, India


order by

2021-06-10 Thread Luca Ferrari
Hi all,
this could be trivial, but I found the following by accident:

pgbench=> create table t as select v from generate_series( 1, 2 ) v;
SELECT 2
pgbench=> select * from t order by foo;
ERROR:  column "foo" does not exist
LINE 1: select * from t order by foo;
 ^
pgbench=> select * from t order by t;
 v
---
 1
 2
(2 rows)

The ORDER BY rejects non-existent columns (right) but accepts the
table itself as an ordering expression.
Reading here 
I cannot really understand why it works, or what kind of ordering it
applies at all; I suspect it is the same ordering as without any
ORDER BY.

Any hint?

Just for the record, it seems that supplying another table name is
rejected too, as I would expect:

pgbench=> select * from t order by pgbench_accounts;
ERROR:  column "pgbench_accounts" does not exist
LINE 1: select * from t order by pgbench_accounts;


Thanks,
Luca




Re: order by

2021-06-10 Thread Vijaykumar Jain
> Any hint?

You can run an EXPLAIN ANALYZE to check what is going on:
when you provide a table name in the ORDER BY clause, the rows are
ordered by the whole row of that table, i.e. by its columns in declaration order.

create table t(id int, value int);

postgres=# explain (analyze,verbose) select * from t order by t;
                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------
 Sort  (cost=158.51..164.16 rows=2260 width=40) (actual time=0.005..0.006 rows=0 loops=1)
   Output: id, value, t.*
   Sort Key: t.*
   Sort Method: quicksort  Memory: 25kB
   ->  Seq Scan on public.t  (cost=0.00..32.60 rows=2260 width=40) (actual time=0.002..0.002 rows=0 loops=1)
         Output: id, value, t.*
 Planning Time: 0.033 ms
 Execution Time: 0.044 ms
(8 rows)

postgres=# truncate table t;
TRUNCATE TABLE
postgres=# insert into t values (100, 1);
INSERT 0 1
postgres=# insert into t values (50, 2);
INSERT 0 1
postgres=# select * from t order by t;
 id  | value
-----+-------
  50 | 2
 100 | 1
(2 rows)

postgres=# select * from t order by t.id, t.value;
 id  | value
-----+-------
  50 | 2
 100 | 1
(2 rows)




Re: order by

2021-06-10 Thread Luca Ferrari
On Thu, Jun 10, 2021 at 10:15 AM Vijaykumar Jain
 wrote:
> you can run an explain analyze to check what is going on,
> when you provide a table in query in the order by clause, it is
> ordered by cols of that table in that order.

Clever, thanks!
I also realized that this "table to tuples" expansion works for GROUP BY too.
However, I'm not able to find this documented in the GROUP BY, WHERE, or
ORDER BY clause sections of the SELECT documentation
<https://www.postgresql.org/docs/12/sql-select.html>. Could be
my fault, of course.

Luca




Re: order by

2021-06-10 Thread Thomas Munro
On Thu, Jun 10, 2021 at 8:40 PM Luca Ferrari  wrote:
> On Thu, Jun 10, 2021 at 10:15 AM Vijaykumar Jain
> > when you provide a table in query in the order by clause, it is
> > ordered by cols of that table in that order.
>
> Clever, thanks!
> I also realized that this "table to tuples" expansion works for GROUP BY too.
> However, I'm not able to find this documented in GROUP BY, WHERE,
> ORDER BY clauses sections into select documentation
> https://www.postgresql.org/docs/12/sql-select.html>. Could be
> my fault, of course.

There's something about this here:

https://www.postgresql.org/docs/13/rowtypes.html#ROWTYPES-USAGE




Re: order by

2021-06-10 Thread Laurenz Albe
On Thu, 2021-06-10 at 10:39 +0200, Luca Ferrari wrote:
> I also realized that this "table to tuples" expansion works for GROUP BY too.
> However, I'm not able to find this documented in GROUP BY, WHERE,
> ORDER BY clauses sections into select documentation
> https://www.postgresql.org/docs/12/sql-select.html>;. Could be
> my fault, of course.

I don't think it is really documented outside the source, but it is
called a "whole-row reference" and behaves in SQL like the composite
data type that belongs to the table.

So ordering is lexicographical and equality is equality of all members;
and don't ask me about IS NULL and IS NOT NULL, else I point you to
https://www.postgresql.org/message-id/flat/48BDABE9-88AB-46E9-BABE-F70DDBFB98BD%40kineticode.com
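For example:

SELECT ROW(1, 2) < ROW(2, 1);   -- true: comparison is lexicographic, the first columns decide
SELECT ROW(1, 2) = ROW(1, 2);   -- true: all members are equal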

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Questions about support function and abbreviate.

2021-06-10 Thread Han Wang
Hi all,

I am trying to implement a sort support function for geometry data types in
PostGIS with the new feature `SortSupport`. However, I have a question
about this.

I think it is hardly possible to apply a sort support function to a complex
data type without the `abbrev_converter` to condense the data structure into
a single `Datum`. However, I do not know how the system determines when to
apply the converter.

I appreciate any answers or suggestions. I am looking forward to hearing
from you.

Best regards,
Han


Re: index unique

2021-06-10 Thread Marc Millas
Thanks Thomas,

but, as stated after the first post, the need is for a PK as required by
Postgres (i.e. for technical needs, not for functional needs).
Up to now it looks like we must create a PK (and so, the associated index)
just to satisfy logical replication (and QGIS, which also needs a PK).
That index (some kind of hash of the geom column plus the other 2 columns) has
no meaning from a functional point of view, and chances are it will never be
used by Postgres in normal operation, as the hash will not help with any
topological request...

that was the reason for the first mail: since we must create a PK, is there any
way to make it something useful rather than this useless "thing"?




Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com



On Tue, Jun 8, 2021 at 10:51 PM Thomas Kellerer  wrote:

> Marc Millas schrieb am 03.06.2021 um 22:51:
> > on a table we need a primary key and to get a unique combinaison, we
> need 3 columns of that table:
> > 1 of type integer,
> > 1 of type text,
> > 1 of type geometry
> >
> > creating the PK constraint doesn work: (even with our current small data
> set)
> > ERROR:  index row size 6072 exceeds btree version 4 maximum 2704 for
> index "xxx_spkey"
> > DETAIL:  Index row references tuple (32,1) in relation "xxx".
> > HINT:  Values larger than 1/3 of a buffer page cannot be indexed.
> > Consider a function index of an MD5 hash of the value, or use full text
> indexing.
> >
> > ok. we can do this.
> > but if so, we need to create a gist index on the geometry column to do
> any topology request.
> > so 2 indexes containing this single column.
> >
> > if we install extension btree_gist, no pb to create an index on all 3
> columns.
> > but as gist does not support unicity, this index cannot be used for the
> PK.
> >
> > OK, we may try to use a function to get the bounding box around the
> geometry objects and use the result into a btree index
> >
> > Any idea (I mean: another idea !) to tackle this ?
> > Or any critic on the "solution" ??
>
> How do you define the "uniqueness" of the geometry?
>
> GIST can support "uniqueness" through exclusion constraints.
> It's not a primary key, so you can't create foreign keys referencing that
> table,
> but it does ensure uniqueness (In fact the "normal" unique indexes are
> essentially a special case of exclusion constraints)
>
> alter table the_table add exclude using gist (int_column with =, text_col with
> =, geometry_col with &&);
>
> Replace the && operator with whatever is appropriate for your use case.
>
> Thomas
>
>
>


SELECT in VCHAR column for strings with TAB

2021-06-10 Thread Matthias Apitz


Hello,

I want to search in a VCHAR column for a string with two TABs at the end.
I tried some things w/o any luck, like:

select * from acq_vardata where name=concat('Test202112', 9, 9);
select * from acq_vardata where name=concat('Test202112', '\t\t');

Any ideas? Thx

matthias
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
¡Con Cuba no te metas!  «»  Don't mess with Cuba!  «»  Leg Dich nicht mit Kuba 
an!
http://www.cubadebate.cu/noticias/2020/12/25/en-video-con-cuba-no-te-metas/




Re: order by

2021-06-10 Thread Tom Lane
Luca Ferrari  writes:
> The ORDER BY rejects non existent columns (right) but accepts the
> table itself as an ordering expression.

As others have noted, this is basically taking the table name as a
whole-row variable, and then sorting per the rules for composite
types.  I write to point out that you can often get some insight into
what the parser thought it was doing by examining the reverse-listing
for the query.  The simplest way to do that is to create a view and
examine the view:

regression=# create view v as  
regression-# select * from t order by t;
CREATE VIEW
regression=# \d+ v
  View "public.v"
 Column |  Type   | Collation | Nullable | Default | Storage | Description 
+-+---+--+-+-+-
 v  | integer |   |  | | plain   | 
View definition:
 SELECT t.v
   FROM t
  ORDER BY t.*;

The form "t.*" is a more explicit way to write a whole-row variable.

(IIRC, accepting it without "*" is a PostQUEL-ism that we've never
got rid of.  I think that with "*", there's at least some support
for the concept in the SQL standard.  But I'm insufficiently
caffeinated to want to go digging for that.)

regards, tom lane




Re: SELECT in VCHAR column for strings with TAB

2021-06-10 Thread Ray O'Donnell

On 10/06/2021 14:30, Matthias Apitz wrote:


Hello,

I want to search in a VCHAR column for a string with two TAB at the end.
I tried some things w/o any luck, like:

select * from acq_vardata where name=concat('Test202112', 9, 9);
select * from acq_vardata where name=concat('Test202112', '\t\t');


Maybe use a regular expression?

https://www.postgresql.org/docs/13/functions-matching.html


Ray.

--
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie




Re: SELECT in VCHAR column for strings with TAB

2021-06-10 Thread Tom Lane
Matthias Apitz  writes:
> I want to search in a VCHAR column for a string with two TAB at the end.
> I tried some things w/o any luck, like:

> select * from acq_vardata where name=concat('Test202112', 9, 9);
> select * from acq_vardata where name=concat('Test202112', '\t\t');

By default, backslash is not magic in SQL literals.  The right way
to spell that is something like

select * from acq_vardata where name = E'Test202112\t\t';
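Another way to spell it is to build the tabs with chr() (chr(9) is a tab):

select * from acq_vardata where name = 'Test202112' || chr(9) || chr(9);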

See the discussion of "escape strings" in

https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS

regards, tom lane




Re: Logical Replication: SELECT pg_catalog.set_config Statement appears to be hanging

2021-06-10 Thread Hannes Kühtreiber


Hello Tom,

thanks for the clarification. We are stumbling over this on PostgreSQL 11.11.

I have now searched the release notes to find out when this will be resolved.

I found the following matching entry:

https://www.postgresql.org/docs/14/release-14.html  

E.1.3.1.7. System Views
.
Improve pg_stat_activity reporting of walsender processes (Tom Lane)
Previously only SQL commands were reported.

So we have to wait for PostgreSQL 14, or is there something else to this
effect in an earlier release that I failed to find?


regards

Hannes



Am 18.05.2021 um 15:15 schrieb Tom Lane:

=?UTF-8?Q?Hannes_K=c3=bchtreiber?=  writes:

Hello Tom, thanks for your answer!
We found out because we are monitoring long running queries, and saw it
had been running for a month before the restart yesterday.
I just queried pg_stat_activity and it seems to be running since then.

Oh, that's because pg_stat_activity continues to show the last plain-SQL
query executed by the session.

I think we recently changed things so that replication commands
would be shown in pg_stat_activity too, but evidently you're running
a version older than that.

regards, tom lane










Re: Logical Replication: SELECT pg_catalog.set_config Statement

2021-06-10 Thread Hannes Kühtreiber

Hello Jeremy,

thanks for your input (and sorry for the delay).

For our monitoring we query like this:

SELECT EXTRACT(epoch FROM (LOCALTIMESTAMP - pg_stat_activity.query_start))::integer AS age
FROM pg_stat_activity
WHERE pg_stat_activity.state = 'active' AND query NOT LIKE 'autovacuum:%'
ORDER BY pg_stat_activity.query_start ASC
LIMIT 1

but we stumble over that query nonetheless, its state being active:

+---------+----------------------------------------------------------+
| State   | Running query                                            |
+---------+----------------------------------------------------------+
| active  | SELECT pg_catalog.set_config('search_path', '', false); |
+---------+----------------------------------------------------------+


is there another good way to exclude it?
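
(One variant I am considering - just a sketch, assuming the backend_type
column of pg_stat_activity, which should be available on 11.x - would be to
restrict the check to client backends:

SELECT EXTRACT(epoch FROM (LOCALTIMESTAMP - query_start))::integer AS age
FROM pg_stat_activity
WHERE state = 'active'
  AND backend_type = 'client backend'   -- excludes walsender and other background processes
  AND query NOT LIKE 'autovacuum:%'
ORDER BY query_start ASC
LIMIT 1;

but I am not sure whether that is reliable.)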

regards

Hannes


Am 18.05.2021 um 17:52 schrieb Jeremy Smith:





We found out because we are monitoring long running queries, and
saw it had been running for a month before the restart yesterday.
I just queried pg_stat_activity and it seems to be running since
then.

taimusz=# SELECT pid, query_start, usename, left(query,70)
FROM pg_stat_activity
WHERE query != '' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start;
   pid   |          query_start          |  usename   |                           left
---------+-------------------------------+------------+----------------------------------------------------------
 2321161 | 2021-05-17 16:15:13.906679+02 | subscriber | SELECT pg_catalog.set_config('search_path', '', false);



You should add: AND state != 'idle' to filter out queries that are no 
longer running and don't have an open transaction.  Your query is 
finding long running sessions, not necessarily long running queries.

--









Re: bottom / top posting

2021-06-10 Thread Basques, Bob (CI-StPaul)
All,

First, I AM an old-timer, and prefer the top posting for a number of reasons.  
I’ve tried the Digest versions of lists in the past and they didn’t quite work 
out (for me).

Top post, newest on top, older stuff indented, can be clipped wherever, 
although I prefer to see everything if I need to, below. Why should I need to 
scroll down to see the newest stuff, especially in long threads. Top posting is 
very easy to understand what’s what, always see the latest stuff first, no 
scrolling to get going, if I want to see history, I scroll down, I prefer to 
see the whole thread at once, even the really long ones, for the context.  I 
tend to search for old stuff in the archives through Nabble which works great 
(for me).

Alternatively, where I deem appropriate I will do inline posting/clipping, but 
I always announce that at the top (posting) of the response.

My tarnished two cents worth . . .  :c)

Bobb

P.S.  BTW, I intentionally don’t use a big SIG in my Email because of my list 
participation.  I have no control over my company's addition of those “Think 
Before You Click . . .” placards either.  Only alternative there is to not use 
my Company account, which seems wrong to me for some reason  . . .  I could 
probably figure out some way to address those placards automatically, but 
haven’t tried as of yet.  I wonder if I could do it with a rule/scripting . . . 
.


From: Vijaykumar Jain 
Date: Monday, June 7, 2021 at 7:06 AM
To: pgsql-general 
Subject: bottom / top posting

Think Before You Click: This email originated outside our organization.

I hear a lot of suggestions wrt bottom / top posting.

only when i read this,
PostgreSQL - general - Top posting | Threaded View 
(postgresql-archive.org)

I got a feeling it sounded rude to the top post, despite me not even having an 
iota of intention to think that way. I was mostly used to tickets or chat.
but as with chats, using CAPS was not known to me as shouting for a long time, 
when someone pointed out to me this is what others feel when they read.

Will keep this in mind, for me as well as others who are not aware of the same.

--
Thanks,
Vijay
Mumbai, India


Re: Logical Replication: SELECT pg_catalog.set_config Statement appears to be hanging

2021-06-10 Thread David G. Johnston
On Thu, Jun 10, 2021, 09:06 Hannes Kühtreiber 
wrote:

> so we have to wait for psql14, or is there something else to this effect
> in an earlier release, that I failed to find?
>

By definition every feature in a vX.0 release note is new as of that
release.  The sentence at the top of that section states this explicitly
too.

David J.

>


Implicit table removal from logical replication publication

2021-06-10 Thread Avi Weinberg
Hi Experts

I had a case where a team member deleted and recreated an empty table which 
participated in logical replication.  After that action the table was no longer 
part of the replication and new inserts were not copied to the subscribers.

How can I check an existing publication for the list of participating tables so I 
will know (from the publication side) that a table was removed?

It is possible for a table to be removed from a publication using the following 
steps:

1. create a publication with 2 tables
2. create a subscription to that publication
3. verify you get two rows for the following query on the subscriber side:
   "select * from pg_subscription_rel"
4. drop one of the tables on the publisher side
5. run "alter subscription the_name refresh publication"
6. run "select * from pg_subscription_rel" and see that only one row remains
7. even if you now re-create the dropped table on the publisher side, it will no
   longer participate in the publication until you add it explicitly again using
   "alter publication ... add table"

How can I know that the table was removed from the publication so I will know to 
add it back?

Thanks!



IMPORTANT - This email and any attachments is intended for the above named 
addressee(s), and may contain information which is confidential or privileged. 
If you are not the intended recipient, please inform the sender immediately and 
delete this email: you should not copy or use this e-mail for any purpose nor 
disclose its contents to any person.


Re: How to pass a parameter in a query to postgreSQL 12

2021-06-10 Thread Peter J. Holzer
On 2021-06-09 14:51:46 -0500, Hassan Camacho Cadre wrote:
> I recently installed a postgreSQL v12, in previous version 8.3 in all my
> queries I pass parameters using the character :
> 
> SELECT
> 
>   public.tabla.id
> 
> FROM 
> 
>   public.tabla
> 
> WHERE  
> 
>   public.tabla.id = :a

That looks Oracle-ish to me.


> In the new version when I try to make this query it sends me an error
> 
> ERROR syntax error at or near ":"
> 
>  
> 
> I tried
> 
>  
> 
> ?a , ¿a, @a
> 
>  
> 
> But I always got an error
> 
> 
> I am executing this query on the query editor of pgadmin 4

I don't use pgAdmin4, and don't know how (and if) it handles
parameters.

Natively, PostgreSQL uses $1, $2, ... for parameters in prepared queries.
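For example, at the SQL level (a generic sketch, independent of pgAdmin):

PREPARE get_row(integer) AS
  SELECT public.tabla.id FROM public.tabla WHERE public.tabla.id = $1;
EXECUTE get_row(42);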

AFAIK pgAdmin4 is written in Python, so it is likely that it supports
the parameter binding of the Python PostgreSQL library in use (most
likely psycopg2). That would be %s for unnamed parameters and %(name)s
for named parameters.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Implicit table removal from logical replication publication

2021-06-10 Thread Vijaykumar Jain
Wow, the DROP TABLE silently removes the entry from the publication without
any log message.

I could not find any monitoring view to help me figure out whether the
publication is broken due to a DDL change.
pg_stat_replication on the publisher and pg_stat_subscription on the
subscriber only help with LSN-based lag.
Unless this is not an issue but a feature?
I'll check more on how to properly monitor logical replication.

But for your case:
you can set up an event trigger that prevents dropping the table,
i.e. any drop of a table (or any DDL) that would break the publication.

functions-event-triggers

event-trigger-definition

how-use-event-triggers-postgresql


You can have a custom filter in such a trigger that prevents accidentally
dropping objects that are part of a publication.
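A rough sketch of such an event trigger (untested; I am assuming here that
pg_event_trigger_dropped_objects() reports the removed pg_publication_rel
entry with object_type = 'publication relation' - please verify that string
on your version before relying on it):

create or replace function guard_publication_members() returns event_trigger as $$
begin
    -- if this drop also removed a table's membership in a publication, abort the whole drop
    if exists (select 1
                 from pg_event_trigger_dropped_objects()
                where object_type = 'publication relation') then
        raise exception 'drop blocked: the table is part of a publication; run ALTER PUBLICATION ... DROP TABLE first';
    end if;
end;
$$ language plpgsql;

create event trigger guard_publication_members_trg
    on sql_drop
    execute function guard_publication_members();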

And when you really do want to drop the table, you first remove it from the
publication and only then drop it.

I have not run this in production, so others may want to chime in, but
diagnosing logical replication issues from the logs alone is not great.
I am happy to be corrected.
I'll update on more scenarios.
 more logical_replication_example.sh
initdb -D 12/rep1 >/dev/null 2>&1
initdb -D 12/rep2 >/dev/null 2>&1

printf "port = 6111 \nwal_level = logical\n log_statement='all'\n" >> 
12/rep1/postgresql.auto.conf
printf "port = 7111 \nwal_level = logical\n log_statement='all'\n" >> 
12/rep2/postgresql.auto.conf

pg_ctl -l rep1.log -D 12/rep1 start
pg_ctl -l rep2.log -D 12/rep2 start

psql -p 6111 -c "create database source_rep;"

cat > source_rep.sql < target_rep.sql <

Re: Implicit table removal from logical replication publication

2021-06-10 Thread Cory Nemelka
On Thu, Jun 10, 2021 at 12:39 PM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

> Wow, the drop table silently removes entry from publication without any
> logs.
>
> I could not find any monitoring view to help me figure out if the
> publication is broken due to ddl change.
> pg_stat_replication on publisher, and pg_stat_subscription on
> subscriber only help with lsn based lag.
> unless this is not an issue but a feature ?
> i'll check more on the better part of monitoring logical replication stuff.
>
> but for your case,
> you can set up an event trigger that would avoid dropping the table.
> basically any drop of a table or any ddl that would break publication.
>
> functions-event-triggers
> 
> event-trigger-definition
> 
> how-use-event-triggers-postgresql
> 
>
> you can have a custom query filter that would prevent dropping of objects
> part of publication accidentally.
>
> and then you want to exclusively drop the table, once not part of
> publication, you have to first remove the table from publication and then
> drop.
>
> I have not run this in production, so I believe others may chime in, but
> logical replication issues from logs are not the best.
> I am happy to be corrected.
> I'll update on more scenarios.
>
> is pg_publication_tables what you are looking for?

>
>
> --
--cnemelka


Re: Implicit table removal from logical replication publication

2021-06-10 Thread Vijaykumar Jain
pg_subscription_rel and pg_publication_rel hold the relations that belong to
the subscription and the publication, respectively.
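For example (a sketch; 'my_pub' is a placeholder publication name):

-- on the publisher: which tables are currently part of the publication
select schemaname, tablename from pg_publication_tables where pubname = 'my_pub';

-- on the subscriber: which relations the subscription knows about, and their sync state
select srrelid::regclass as relation, srsubstate from pg_subscription_rel;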
The OP's problem is figuring out whether the publication contains tables that
are no longer in sync on the subscription side, i.e. whether the subscription
is broken.
There may be ways to query the tables on the subscriber databases via dblink
or FDW, but there is no view or metric at the individual-relation level of a
subscription to help figure out whether it is broken.

I mean, for replication slots we have an 'active' flag to easily spot
replication issues, but there is nothing similar for pub/sub.
I have only used logical replication for upgrades, so I have little
experience with issues caused by DDL changes.


Re: bottom / top posting

2021-06-10 Thread Peter J. Holzer
On 2021-06-09 14:41:47 -0700, Dean Gibson (DB Administrator) wrote:
> 
> On Mon, Jun  7, 2021 at 07:53:30PM +0200, Francisco Olarte wrote:
> 
> > ... properly scanning a top posted one takes much longer.
> 
> Not here.
> 
> > I find top-posting moderately offensive, like saying "I am not going
> > to waste time to make your reading experience better".
> 
> Not here either.
> 
> Top-posting has been the predominantly common practice in the business &
> government world for decades, & it is easy to adapt to.  Just like HTML eMail
> (within reason) & more than 80 columns on a line.  Somehow, millions of
> ordinary people are able to adapt to this,

They have probably never known anything else, so they didn't "adapt" to
it.

> I suppose it comes from the practice in those environs when paper memos were
> the norm, & if you needed to attach the contents of other paper memos to your
> own for context, you stapled them to the BACK of your own.

Yes, that's also my theory. That made a lot of sense with paper,
especially before photocopiers were invented, and you simply added your
memo to the file and then passed the whole file to the next person.

Makes much less sense in an electronic medium where you can copy and
delete without scissors and glue.

> Of course, wherever (top/bottom) one posts, trimming is important, but it's 
> far
> less important with top-posting.  You usually don't have to scroll down to get
> the immediate context, & if you do, you have less far to scroll. 

I know at least one person who insists that you never trim anything,
because he wants to keep only the last mail in each thread, so of course
that mail must contain everything. Of course that breaks down as soon as
somebody replies to some other mail than the most recent one. Don't know
how he deals with that.

> I wonder about the tolerance of the world we live in.  Somehow, I can deal 
> with
> top-posting, bottom-posting, middle-posting, HTML eMail, straight-text eMails,
> 80-column eMails, variable-width eMails, occasional ALL CAPS eMails, & stupid
> multi-line signatures, all without getting my tail in a knot over it.

I can deal with it. Especially when I'm paid for it. I think it's a
waste of my time (but if somebody else pays ...) and it is inefficient,
as it is very easy to overlook relevant details in that ever-growing
mess. I never understood why so many people hated e-mail as a
communication medium. Now I do.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: bottom / top posting

2021-06-10 Thread Peter J. Holzer
On 2021-06-10 01:23:34 -0400, Steve Litt wrote:
> Dean Gibson (DB Administrator) said on Wed, 9 Jun 2021 14:41:47 -0700
> >Top-posting has been the predominantly common practice in the business
> >& government world for decades, & it is easy to adapt to.  
> 
> OF COURSE! In business you need the CYA of having the entire discussion
> archived,

You can archive more than one e-mail per thread, you know :-)

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: index unique

2021-06-10 Thread Alban Hertroys


> On 8 Jun 2021, at 22:50, Thomas Kellerer  wrote:
> 
> Marc Millas schrieb am 03.06.2021 um 22:51:
>> on a table we need a primary key and to get a unique combinaison, we need 3 
>> columns of that table:
>> 1 of type integer,
>> 1 of type text,
>> 1 of type geometry
>> 
> 
> How do you define the "uniqueness" of the geometry?

That is actually the big question here. Multiple “unique” geometries can 
specify the same geometry!

A geom as simple as a line from (0,0) - (1,0) can just as easily be specified 
as (1,0) - (0,0). That’s the simplest case, and one could argue that the point 
of origin is different, but the next example would be a triangle starting at 
the same origin but traversed in different directions. It gets harder the more 
vertices a polygon has.

I would argue that a geometry type is ill-suited as a primary key column 
candidate.

Now, of course, the OP could have a case where their geometries are guaranteed 
to be unique regardless, but they’d better make sure before adding them to the 
PK.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: bottom / top posting

2021-06-10 Thread Jan Wieck

On 6/10/21 12:08 PM, Basques, Bob (CI-StPaul) wrote:

Alternatively, where I deem appropriate I will do inline 
posting/clipping, but I always announce that at the top (posting) of the 
response.


"where you deem appropriate" you will do that, and where "I deem 
appropriate" I will stop reading whatever you thought was important.


Want to get a message to me? Think less about what you deem appropriate.


Best Regards, Jan

--
Jan Wieck
Postgres User since 1994




Even more OT: Ditto machines [was: bottom / top posting]

2021-06-10 Thread Dean Gibson (DB Administrator)

On 2021-06-10 13:21, Peter J. Holzer wrote:

On 2021-06-09 14:41:47 -0700, Dean Gibson (DB Administrator) wrote:
... when paper memos were the norm ...
... before photocopiers were invented...



Tom mentioned "old-timers."  Remember "Ditto" machines?  Remember the 
odor?  They were in common use when I was in high school (1960).


https://en.wikipedia.org/wiki/Mimeograph






Re: Even more OT: Ditto machines [was: bottom / top posting]

2021-06-10 Thread Tom Browder
On Thu, Jun 10, 2021 at 15:52 Dean Gibson (DB Administrator) <
postgre...@mailpen.com> wrote:

> On 2021-06-10 13:21, Peter J. Holzer wrote:
>
> On 2021-06-09 14:41:47 -0700, Dean Gibson (DB Administrator) wrote:
>
> ... when paper memos were the norm ...
>
> ... before photocopiers were invented...
>
> Tom mentioned "old-timers."  Remember "Ditto" machines?  Remember the
> odor?  They were in common use when I was in high school (1960).
>

Indeed I "ditto" remember them! And the smell was the prize for running an
errand for the teacher to pick up copies from the Mimeograph room.

-Tom


Re: Even more OT: Ditto machines [was: bottom / top posting]

2021-06-10 Thread Rob Sargent


Indeed I "ditto" remember them! And the smell was the prize for 
running an errand for the teacher to pick up copies from the 
Mimeograph room.


-Tom
Hand-cranked Gestetner, anyone?  Blotchy ink, indelible mess but made 
one feel a brother of Gutenberg.





Adding table partition slow when there is default partition with data (primary key not used to check partition condition)

2021-06-10 Thread Sasa Vilic
Hi all,

I am doing following:

CREATE TABLE "change" (
  transaction_id uuid NOT NULL,
  id int4 NOT NULL,
  change_type varchar NOT NULL,
  object_type varchar NOT NULL,
  object_content jsonb NOT NULL,
  category_id uuid NOT NULL,
  CONSTRAINT change_pkey PRIMARY KEY (transaction_id, id)
) partition by list(transaction_id);

create table change_default
partition of "change" default;

insert into "change"
select * from old_change; -- 17437300 rows

create table change_bf6840c7_3e7b_4100_b0e4_f5844fb7635d
partition of "change"
for values in ('bf6840c7-3e7b-4100-b0e4-f5844fb7635d');

Adding this last partition takes around 19 seconds. I understand that
Postgres has to check that 'bf6840c7-3e7b-4100-b0e4-f5844fb7635d' is not
present in the default partition, but it could use the primary key for that,
and then it shouldn't take that long, right? This new table is currently not
being used, so it can't be a table lock, so the only reasonable conclusion is
that it does a full scan of the default partition. Can that be optimized to
just do a primary-key lookup?

Server:
PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-36), 64-bit
8 cores, 32 GiB RAM, 1TB SSD

Thanks in advance!
Sasa


Re: bottom / top posting

2021-06-10 Thread Steve Litt
Because otherwise it's hard to figure out what some top posters are
talking about.


Basques, Bob (CI-StPaul) said on Thu, 10 Jun 2021 16:08:14 +

>All,
>
>First, I AM an old-timer, and prefer the top posting for a number of
>reasons.  I’ve tried the Digest versions of lists in the past and they
>didn’t quite work out (for me).
>
>Top post, newest on top, older stuff indented, can be clipped
>wherever, although I prefer to see everything if I need to, below. Why
>should I need to scroll down to see the newest stuff, especially in
>long threads. Top posting is very easy to understand what’s what,
>always see the latest stuff first, no scrolling to get going, if I
>want to see history, I scroll down, I prefer to see the whole thread
>at once, even the really long ones, for the context.  I tend to search
>for old stuff in the archives through Nabble which works great (for
>me).
>
>Alternatively, where I deem appropriate I will do inline
>posting/clipping, but I always announce that at the top (posting) of
>the response.
>
>My tarnished two cents worth . . .  :c)
>
>Bobb
>
>P.S.  BTW, I intentionally don’t use a big SIG in my Email because of
>my list participation.  I have no control over my companies addition
>of those “Think Before You Click . . .” placards either.  Only
>alternative there is to not use my Company account, which seems wrong
>to me for some reason  . . .  I could probably figure out some way to
>address those placards automatically, but haven’t tried as of yet.  I
>wonder if I could do it with a rule/scripting . . . .
>
>
>From: Vijaykumar Jain 
>Date: Monday, June 7, 2021 at 7:06 AM
>To: pgsql-general 
>Subject: bottom / top posting
>
>Think Before You Click: This email originated outside our organization.
>
>I hear a lot of suggestions wrt bottom / top posting.
>
>only when i read this,
>PostgreSQL - general - Top posting | Threaded View
>(postgresql-archive.org)
>
>I got a feeling it sounded rude to the top post, despite me not even
>having an iota of intention to think that way. I was mostly used to
>tickets or chat. but as with chats, using CAPS was not known to me as
>shouting for a long time, when someone pointed out to me this is what
>others feel when they read.
>
>Will keep this in mind, for me as well as others who are not aware of
>the same.
>
>--
>Thanks,
>Vijay
>Mumbai, India




Re: Even more OT: Ditto machines [was: bottom / top posting]

2021-06-10 Thread Francisco Olarte
On Thu, Jun 10, 2021 at 10:52 PM Dean Gibson (DB Administrator)
 wrote:
> Tom mentioned "old-timers."  Remember "Ditto" machines?  Remember the odor?  
> They were in common use when I was in high school (1960).
> https://en.wikipedia.org/wiki/Mimeograph

I was in the Spanish equivalent of high school in 1980, and although
there were photocopiers, mimeographs were still used a lot. IIRC they were
much cheaper per copy, and were commonly used for exams and similar
high-volume things. And in those years Spain still lagged a lot behind
Europe / the USA.

Francisco Olarte.




Re: bottom / top posting

2021-06-10 Thread Francisco Olarte
On Wed, Jun 9, 2021 at 11:42 PM Dean Gibson (DB Administrator)
 wrote:
...
> Top-posting has been the predominantly common practice in the business & 
> government world for decades,
> & it is easy to adapt to.  Just like HTML eMail (within reason) & more
> than 80 columns on a line.

Yeah, but I disliked it even before HTML was invented. 80 columns is a
different kind of problem, and these days nearly every MUA does wrapping
either on send or on display.

The problem I found is that top posting is easier when discussing simple
things and doing ping-pong conversations. For the mails on this list, which I
read daily more or less, and which have convoluted discussions, I find
it, especially the untrimmed versions normally used by top posters,
much harder.

> Somehow, millions of ordinary people are able to adapt to this, on very 
> popular network eMail providers, like Google groups & groups.io, as well as 
> their work environment.

Oh, I can adapt. I can process it, and in fact I faithfully copy the
sender's style when at work; after all, I'm paid for it. But for things
like this, technical discussions which I read occasionally, I
appreciate being able to read just half a page top down without
bouncing around. And you bounce a lot.

> I suppose it comes from the practice in those environs when paper memos were 
> the norm, & if you needed to attach the contents of other paper memos to your 
> own for context, you stapled them to the BACK of your own.

Because the alternative was to rewrite?



> I wonder about the tolerance of the world we live in.  Somehow, I can deal 
> with top-posting, bottom-posting, middle-posting, HTML eMail, straight-text 
> eMails, 80-column eMails, variable-width eMails, occasional ALL CAPS eMails, 
> & stupid multi-line signatures, all without getting my tail in a knot over it.

I tolerate it, but my tolerance levels differ. I can deal with all of
that, and try not to complain too much, but nobody is a good judge of
themselves. Currently I open the list in Gmail on a 4k screen with lateral
tabs; it fits about a full page of text easily, and if I cannot understand a
message at a glance I normally just delete it. I'm not forcing
anyone to repost it. And the longer these things get, the less I try
to read them. People are free to use any style; I'm free to discard the
ones I do not like.

> But then, I was VERY successful in my software development career, consulting 
> at about 30 companies (now retired).  Maybe working with others without 
> conflict on silly issues, had something to do with it.

That is work. I do not do consulting, and my people skills are not too
good, but I haven't had complaints and I certainly do not try to make
people adopt any style; I just do tit for tat and go on. I care about
the quality of postings on these lists, but I do not care at all about the
quality of mails I receive at work. My employer probably cares more,
as he pays me to read them and would like me to spend as little
time on them as possible.

Francisco Olarte.




Postgresql crashdown and recovery failure

2021-06-10 Thread xiebin (F)
Hi,

My database server (PostgreSQL 12.5) uses remote storage as its data directory.

I powered off the database server and started PostgreSQL (same version) on 
another server, with the same data. However, it failed to start. I checked the 
WAL and found that the content around the latest checkpoint is empty (about 
1700 zero bytes).

# pg_controldata -D .
pg_control version number:1201
Catalog version number:   201909212
Database system identifier:   6967594878781603854
Database cluster state:   in production
pg_control last modified: Wed 09 Jun 2021 04:23:24 PM CST
Latest checkpoint location:   0/9893978
Latest checkpoint's REDO location:0/9893940
Latest checkpoint's REDO WAL file:00010009
Latest checkpoint's TimeLineID:   1
...

# pg_waldump -p ./pg_wal/ -s 0/9893900
first record is after 0/9893900, at 0/9893908, skipping over 8 bytes
rmgr: Standby len (rec/tot): 50/50, tx:  0, lsn: 
0/09893908, prev 0/098938E0, desc: RUNNING_XACTS nextXid 30176 
latestCompletedXid 30175 oldestRunningXid 30176
pg_waldump: fatal: error in WAL record at 0/9893908: invalid record length at 
0/9893940: wanted 24, got 0

# hexdump -C ./pg_wal/00010009 -s 0x89393a -n 1
0089393a  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
00893ffa  00 00 00 00 00 00 01 d1  05 00 01 00 00 00 00 40  |...@|
0089400a  89 07 00 00 00 00 4b 00  00 00 00 00 00 00 00 00  |..K.|
...

It seems the checkpoint record in the WAL buffer was not flushed to disk, 
leading to the recovery failure.

The remote storage supports POSIX IO, meaning the data should be written to disk 
as long as PostgreSQL called fdatasync (or a similar system call). Please 
note that the storage server has not been shut down.

Besides, I traced the system calls of a PostgreSQL checkpoint (triggered by 
"psql -c checkpoint").

# strace -fvy -s 64 pg_ctl -D /tmp/data start 2>&1 | grep -E 'open|sync'
...

[pid 109705] openat(AT_FDCWD, "pg_xact/", O_RDWR|O_CREAT, 0600 
[pid 109705] <... openat resumed> ) = 3
[pid 109705] fsync(3 
[pid 109705] <... fsync resumed> )  = 0
[pid 109705] openat(AT_FDCWD, "pg_xact", O_RDONLY) = 3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_commit_ts", O_RDONLY) = 
3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_subtrans/", O_RDWR|O_CREAT, 0600) = 
3
[pid 109705] openat(AT_FDCWD, "pg_multixact/offsets/", O_RDWR|O_CREAT, 
0600) = 3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_multixact/offsets", O_RDONLY) = 
3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_multixact/members", O_RDONLY) = 
3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_logical/snapshots", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[pid 109705] openat(AT_FDCWD, "pg_logical/mappings", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
[pid 109705] openat(AT_FDCWD, "pg_logical/replorigin_checkpoint.tmp", 
O_WRONLY|O_CREAT|O_EXCL, 0600) = 
3
[pid 109705] openat(AT_FDCWD, "pg_logical/replorigin_checkpoint.tmp", O_RDWR) = 
3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_logical/replorigin_checkpoint", O_RDWR) = 
3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_logical/replorigin_checkpoint", O_RDWR) = 
3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_logical", O_RDONLY) = 3
[pid 109705] fsync(3) = 0
[pid 109705] openat(AT_FDCWD, "pg_wal/0001000B", O_RDWR 

[pid 109705] <... openat resumed> ) = 
3
[pid 109705] fdatasync(3   (Let's say it is A)
[pid 109705] <... fdatasync resumed> )  = 0
[pid 109705] openat(AT_FDCWD, "/tmp/data/global/pg_control", O_RDWR 
[pid 109705] <... openat resumed> ) = 4
[pid 109705] fsync(4   
 (Let's say it is B)
[pid 109705] <... fsync resumed> )  = 0
[pid 109705] openat(AT_FDCWD, "pg_wal", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4
[pid 109705] openat(AT_FDCWD, "pg_subtrans", 
O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4
[pid 109707] fdatasync(3) = 0
(Let's say it is C)

...

From the output, I can see that the data files, the pg_control file and the 
WAL are flushed to disk. And I noticed that the pg_control file is flushed 
between two WAL flushes (as noted A, B, C).

Here are my questions:

1.  If the power loss occurred between B and C, would it be possible that the 
checkpoint was written to pg_control but not to the WAL, and that as a 
consequence PostgreSQL recovery fails?

2.  Is there any other reason that could cause this problem?

3.  Is there any way to recover the data? I know pg_resetwal can reset the WAL, 
after which PostgreSQL can start, but undoubtedly that could cause some data loss.

Thanks,
Xie Bin



Re: Postgresql crashdown and recovery failure

2021-06-10 Thread Ron

On 6/10/21 10:30 PM, xiebin (F) wrote:


Hi,

My database (Postgresql 12.5) server uses remote storage as data directory.

I powered off the database server and started postgresql (same version) on 
another server, with same data. However it failed. I checked wal logs and 
found that the content around latest checkpoint is empty. (about 1700 zero 
bytes)



[snip]


From the output, I can see that data files, pg_control file and wal log 
are flushed to disk. And I noticed that pg_control file is flushed between 
2 wal flush (as noted A,B,C).


Here are my questions:

1.If power off occurs between B and C. Would it be possible that 
checkpoint was written to pg_control but not to wal log? And as a 
consequence, postgresql recovery would fail?


2.Is there any other reason causing this problem?



Is your Postgresql instance configured to use fsync?

3.Is there any way to recover the data? I know pg_resetwal can reset wal 
log, then postgresql can start. But undoubtedly it could cause some data loss.




--
Angular momentum makes the world go 'round.


Re: Even more OT: Ditto machines [was: bottom / top posting]

2021-06-10 Thread Adrian Ho

On 11/6/21 4:52 am, Dean Gibson (DB Administrator) wrote:
Tom mentioned "old-timers."  Remember "Ditto" machines?  Remember the 
odor?  They were in common use when I was in high school (1960).


https://en.wikipedia.org/wiki/Mimeograph

Was intimately acquainted with them during my military service. Had the 
"privilege" of cutting the majority of each day's stencils, because I 
was the fastest typist in the clerical pool...and because I was the only 
"freak" who actually found mental peace while literally cranking out 
daily orders.


--
Best Regards,
Adrian