Primary keys and composite unique keys (basic question)

2021-03-31 Thread Mohan Radhakrishnan
Hello,
 We have UUIDs in our tables which are primary keys. But in
some cases
we also identify a composite unique key apart from the primary key.

My assumption is that there should be a unique key index created by us
using the composite key. And when we fetch using this composite key instead
of the primary key we have a performance boost due to the index.

Is this right? Are there more details I should read to understand this
better? Please point me to them. If I should use the query planner to look at the
statistics, I will.
Thanks,
Mohan
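
As an illustration of the setup being described (a sketch only, with hypothetical table and column names): declaring a composite UNIQUE constraint makes PostgreSQL build the backing unique index, and a fetch on those columns can then use it:

CREATE TABLE account (
    id     uuid PRIMARY KEY,
    region text NOT NULL,
    code   text NOT NULL,
    UNIQUE (region, code)   -- PostgreSQL creates a composite unique index for this constraint
);

-- EXPLAIN shows which index, if any, the planner picks for this fetch
EXPLAIN SELECT * FROM account WHERE region = 'eu' AND code = 'A-42';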


Looking for some help with HA / Log-Shipping

2021-03-31 Thread Laurent FAILLIE
Hello,
We are running PostgreSQL 12 and I'm trying to set up streaming WAL replication.

I followed the official documentation ( https://www.postgresql.org/docs/12/warm-standby.html ) as well as this tutorial: https://scalegrid.io/blog/managing-high-availability-in-postgresql-part-1/

What I did is:
* created the master db
* created the replication role
* updated postgresql.conf
* finally replicated everything on the slave node using pg_basebackup (with the -R option)

But now I'm lost with §26.2.4 Setting up a standby server.
In which file do I have to put primary_conninfo and the other options?
Thanks
Laurent
PS: my goal is to install PAF (PostgreSQL Automatic Failover)
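
For reference: PostgreSQL 12 no longer uses recovery.conf, and primary_conninfo is an ordinary server setting. When pg_basebackup is run with -R it writes the setting into postgresql.auto.conf on the standby and creates an empty standby.signal file in the data directory; putting it into postgresql.conf by hand works as well. A minimal sketch, with a hypothetical primary host and replication user:

# postgresql.auto.conf (written by pg_basebackup -R) or postgresql.conf
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# an empty standby.signal file in the data directory puts the server into standby mode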


Re: BDR question on dboid conflicts

2021-03-31 Thread higher_ground
I actually just recently encountered this very same problem when calling
bdr_group_join().  The dboid generated is the same as that of an existing
node, and the tuple (sysid, timeline, dboid) is the same as well.  

I saw this manifest two different ways in the logs:

A)
< 2021-03-30 02:42:56.942 UTC >FATAL:  could not send replication command
"CREATE_REPLICATION_SLOT "bdr_16386_6924805489516289687_1_17615__" LOGICAL
bdr": status PGRES_FATAL_ERROR: ERROR:  replication slot
"bdr_16386_6924805489516289687_1_17615__" already exists

B) 
< 2021-03-30 21:02:29.260 UTC >LOG:  Creating replica with:  <…>
Restoring dump to <…>
< 2021-03-30 21:02:31.929 UTC >ERROR:  duplicate key value violates unique
constraint "bdr_nodes_pkey"
< 2021-03-30 21:02:31.929 UTC >DETAIL:  Key (node_sysid, node_timeline,
node_dboid)=(6924805489516289687, 1, 17615) already exists.

I did not see this issue previously on an earlier version of the OS we are
using.  The Postgres/BDR version has not changed either.

It seems that (on this platform, for the experiments I’ve tried thus far)
‘17615’ is always generated as the first dboid of an added node, hence the
conflict.  When we remove the node and try again, another dboid is
predictably tried.  In general (except for the addition of the 2nd node,
which is always successful), for the Nth node added, (N-1) tries are always
needed to ensure a unique dboid (and a unique tuple).

At this point it would be great if there were a way to avoid this
programmatically.  It seems that I can only detect this error condition in
the logs.  The bdr_group_join() call itself does not return an error.

Is there a way to make the sysid, timeline, or dboid unique?

Thank you very much for your help.








JSON operator feature request

2021-03-31 Thread david . turon


Good morning,

We have this everywhere in our code:

COALESCE(json_data, '{}'::json(b)) || json(b)_build_object(...)

or something like:

IF json_data IS NULL THEN ...

I did some code refactoring with:

CREATE OR REPLACE FUNCTION public.jsonb_concat_null(a jsonb, b jsonb)
RETURNS jsonb AS
$$

SELECT
    CASE
        WHEN a IS NOT NULL AND b IS NOT NULL THEN a || b
        WHEN a IS NULL AND b IS NULL THEN NULL
        WHEN a IS NULL THEN b
        WHEN b IS NULL THEN a
        ELSE NULL
    END;

$$ LANGUAGE SQL IMMUTABLE;


CREATE OPERATOR public.||| (
PROCEDURE = public.jsonb_concat_null,
LEFTARG = JSONB,
RIGHTARG = JSONB,
COMMUTATOR =  OPERATOR(public.|||)
);


So now it is possible to join json data with NULL:

json_data || NULL -> NULL
json_data ||| NULL -> json_data

without COALESCE or other NOT NULL checks in the code. Maybe it would be useful to have
this operator in PostgreSQL itself? The code could be written better than mine, and in C. Does anybody else
have the same issue with concatenating json and NULL?
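
A quick sanity check of the operator (assuming the function and operator above have been created and public is on the search_path):

SELECT '{"a": 1}'::jsonb ||| NULL::jsonb;               -- {"a": 1}
SELECT NULL::jsonb ||| '{"b": 2}'::jsonb;               -- {"b": 2}
SELECT '{"a": 1}'::jsonb ||| '{"b": 2}'::jsonb;         -- {"a": 1, "b": 2}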

David T.
--
-
Ing. David TUROŇ
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava

tel.:+420 591 166 224
fax:+420 596 621 273
mobil:  +420 732 589 152
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: ser...@linuxbox.cz
-

pg_rewind

2021-03-31 Thread Alexey Bashtanov

Hi,

I'm trying to get my head around pg_rewind.
Why does it need full_page_writes and wal_log_hints on the target?
As far as I can see, it only needs the old target WAL to see which pages
have been touched since the last checkpoint before the divergence point.

Why can't it get this data from partial pages?

Best, Alex
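
For reference, pg_rewind requires the target cluster to have either wal_log_hints = on or data checksums enabled, plus full_page_writes = on; a quick way to check the settings it cares about (a sketch):

SHOW full_page_writes;   -- must be on
SHOW wal_log_hints;      -- on, unless the cluster was initialized with data checksums
SHOW data_checksums;     -- read-only; determined at initdb time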




Re: Primary keys and composite unique keys (basic question)

2021-03-31 Thread Tom Lane
Mohan Radhakrishnan  writes:
>  We have UUIDs in our tables which are primary keys. But in
> some cases
> we also identify a composite unique key apart from the primary key.

> My assumption is that there should be a unique key index created by us
> using the composite key. And when we fetch using this composite key instead
> of the primary key we have a performance boost due to the index.

You haven't provided a lot of detail, but use-a-UUID-as-a-primary-key
is often an antipattern.  The UUIDs are quasi-random, meaning there's
no locality of reference in the primary key index, resulting in
inefficiency in searches and insertions.  If the composite key you
mention has some actual relationship to your application's usage
patterns, it could be winning as a result of better locality of
access to that index.

regards, tom lane




Re: Primary keys and composite unique keys (basic question)

2021-03-31 Thread Mohan Radhakrishnan
I will cover the UUIDs first. They are indispensable to us.
1. The data is distributed over regions, so we need the row to be unique.
2. This distributed data is sent to services as events. That is the
application architecture.

But we don't always search using UUIDs. Only when data from another
distributed service is received do we need them, and in such cases we have to join using them.

But for local data we can identify another composite unique key. Does
PostgreSQL create a unique index for us? What about an FK that references this
composite unique key? Does it create an FK index?


Thank you.

On Wed, Mar 31, 2021 at 7:42 PM Tom Lane  wrote:

> Mohan Radhakrishnan  writes:
> >  We have UUIDs in our tables which are primary keys. But in
> > some cases
> > we also identify a composite unique key apart from the primary key.
>
> > My assumption is that there should be a unique key index created by us
> > using the composite key. And when we fetch using this composite key
> instead
> > of the primary key we have a performance boost due to the index.
>
> You haven't provided a lot of detail, but use-a-UUID-as-a-primary-key
> is often an antipattern.  The UUIDs are quasi-random, meaning there's
> no locality of reference in the primary key index, resulting in
> inefficiency in searches and insertions.  If the composite key you
> mention has some actual relationship to your application's usage
> patterns, it could be winning as a result of better locality of
> access to that index.
>
> regards, tom lane
>


Re: Looking for some help with HA / Log-Shipping

2021-03-31 Thread Laurent FAILLIE
Replying to myself :)
It seems pg_basebackup did all the tricks, even restarting the slave. And it is
in standby mode. Do I have anything to do in addition?
Thanks
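
One quick way to confirm the standby is actually streaming (a sketch using the standard system views, run on the standby):

SELECT pg_is_in_recovery();                           -- true while in standby/recovery mode
SELECT status, conninfo FROM pg_stat_wal_receiver;    -- status is 'streaming' when the WAL receiver is connected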
 

On Wednesday, March 31, 2021 at 12:51:29 UTC+2, Laurent FAILLIE wrote:

Hello,
We are running PostgreSQL 12 and I'm trying to set up streaming WAL replication.

I followed the official documentation ( https://www.postgresql.org/docs/12/warm-standby.html ) as well as this tutorial: https://scalegrid.io/blog/managing-high-availability-in-postgresql-part-1/

What I did is:
* created the master db
* created the replication role
* updated postgresql.conf
* finally replicated everything on the slave node using pg_basebackup (with the -R option)

But now I'm lost with §26.2.4 Setting up a standby server.
In which file do I have to put primary_conninfo and the other options?
Thanks
Laurent
PS: my goal is to install PAF (PostgreSQL Automatic Failover)
  

standby fails with out-of-order XID insertion

2021-03-31 Thread Radoslav Nedyalkov
Hi all,
So we have a master (pg 11.8) and standbyA (pg 11.11), 24 TB of data, quite a busy
data warehouse on Amazon Linux.
We've built a standbyB (pg 11.11) by creating a basebackup from standbyA and
collecting WALs from the master.
When the basebackup completed we started recovery from the collected WALs.
When standbyB caught up we tried to switch to streaming.
1. created slot on the master
2. activated slot (with short pg_receivewal invocation)
3. did a checkpoint on the master (not sure why)
4. cancelled pg_receivewal.
5. stopped the standbyB and started it. Startup fails with
FATAL:  out-of-order XID insertion in KnownAssignedXids
More log attached.

After switching back to log shipping, the same error occurs.

Any clues as to why this happens, or what the error means?

Thanks and regards,
Radoslav
2021-03-31 14:22:36 UTC [41623] 0: [8-1] user=,db= host=,app=[] LOG:  received 
fast shutdown request
2021-03-31 14:22:36 UTC [41623] 0: [9-1] user=,db= host=,app=[] LOG:  aborting 
any active transactions
2021-03-31 14:22:37 UTC [41642] 0: [1483-1] user=,db= host=,app=[] LOG:  
restartpoint complete: wrote 753696 buffers (2.9%); 0 WAL file(s) added, 0 
removed, 345 recycled; write=122.672 s, sync=0.063 s, total=122.871 s; sync 
files=2588, longest=0.001 s, average=0.001 s; distance=6276805 kB, 
estimate=7385309 kB
2021-03-31 14:22:37 UTC [41642] 0: [1484-1] user=,db= host=,app=[] LOG:  
recovery restart point at A8EA1/298E6BB8
2021-03-31 14:22:37 UTC [41642] 0: [1485-1] user=,db= host=,app=[] DETAIL:  
Last completed transaction was at log time 2021-03-31 14:22:35.091945+00.
2021-03-31 14:22:38 UTC [41642] 0: [1486-1] user=,db= host=,app=[] LOG:  
shutting down
2021-03-31 14:22:38 UTC [41642] 0: [1487-1] user=,db= host=,app=[] LOG:  
restartpoint starting: shutdown immediate
2021-03-31 14:22:41 UTC [41642] 0: [1488-1] user=,db= host=,app=[] LOG:  
restartpoint complete: wrote 304044 buffers (1.2%); 0 WAL file(s) added, 0 
removed, 0 recycled; write=3.318 s, sync=0.027 s, total=3.348 s; sync 
files=2154, longest=0.001 s, average=0.001 s; distance=4314192 kB, 
estimate=7078198 kB
2021-03-31 14:22:41 UTC [41642] 0: [1489-1] user=,db= host=,app=[] LOG:  
recovery restart point at A8EA2/30DFAEA8
2021-03-31 14:22:41 UTC [41642] 0: [1490-1] user=,db= host=,app=[] DETAIL:  
Last completed transaction was at log time 2021-03-31 14:22:35.091945+00.
2021-03-31 14:22:41 UTC [41623] 0: [10-1] user=,db= host=,app=[] LOG:  database 
system is shut down
2021-03-31 14:23:03 UTC [84999] 0: [1-1] user=,db= host=,app=[] LOG:  database 
system was shut down in recovery at 2021-03-31 14:22:41 UTC
2021-03-31 14:23:03 UTC [84999] 0: [2-1] user=,db= host=,app=[] LOG:  entering 
standby mode
2021-03-31 14:23:03 UTC [84999] 0: [3-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 1 to 526/409B7D28
2021-03-31 14:23:03 UTC [84999] 0: [4-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 2 to 8B75/A97FC4E0
2021-03-31 14:23:03 UTC [84999] 0: [5-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 19 to 580/B02F81A8
2021-03-31 14:23:03 UTC [84999] 0: [6-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 6 to 580/B03B13C0
2021-03-31 14:23:03 UTC [84999] 0: [7-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 5 to 228D/B578FBA8
2021-03-31 14:23:03 UTC [84999] 0: [8-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 3 to 5F4/6B0D7560
2021-03-31 14:23:03 UTC [84999] 0: [9-1] user=,db= host=,app=[] LOG:  recovered 
replication state of node 7 to 65F/9971C210
2021-03-31 14:23:03 UTC [84999] 0: [10-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 9 to 65F/B19E5C38
2021-03-31 14:23:03 UTC [84999] 0: [11-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 10 to 3D8/2E753F28
2021-03-31 14:23:03 UTC [84999] 0: [12-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 12 to 526/40791D18
2021-03-31 14:23:03 UTC [84999] 0: [13-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 13 to 521/E87DE3B8
2021-03-31 14:23:03 UTC [84999] 0: [14-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 4 to 526/409B7EE8
2021-03-31 14:23:03 UTC [84999] 0: [15-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 15 to 65F/B1817750
2021-03-31 14:23:03 UTC [84999] 0: [16-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 14 to 200/20004088
2021-03-31 14:23:03 UTC [84999] 0: [17-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 17 to 63F/5CF62898
2021-03-31 14:23:03 UTC [84999] 0: [18-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 18 to 96D/9D63BD30
2021-03-31 14:23:03 UTC [84999] 0: [19-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 16 to DA/43CB4728
2021-03-31 14:23:03 UTC [84999] 0: [20-1] user=,db= host=,app=[] LOG:  
recovered replication state of node 20 to 580/B03AFB58
2021-03-31 14:23:03 UTC [84999] 0: [21-1] user=,db= host=,app=[] L

RE: Issues with using plpgsql debugger using PG13 on Centos 7

2021-03-31 Thread Jain, Ankit
Thank you for the pointer. We were able to build it correctly after setting the 
PG-13 path
PATH=$PATH:/usr/pgsql-13/bin

From: Ian Lawrence Barwick 
Sent: Tuesday, February 2, 2021 2:54 AM
To: Jain, Ankit 
Cc: pgsql-gene...@postgresql.org
Subject: Re: Issues with using plpgsql debugger using PG13 on Centos 7


On Tuesday, February 2, 2021 at 12:06, Jain, Ankit  wrote:
(...)
But got the following error –
ERROR: could not load library "/usr/pgsql-13/lib/plugin_debugger.so": 
/usr/pgsql-13/lib/plugin_debugger.so: undefined symbol: LWLockAssign SQL state: 
58P01

Can you please help with getting the debugger working ?

It looks like the extension code was compiled against an older PostgreSQL
version (presumably 9.5 or earlier; LWLockAssign was removed in 9.6).

You'll need to make sure it's compiled against the major version you're using.

Regards

Ian Barwick

--
EnterpriseDB: 
https://www.enterprisedb.com



Re: Primary keys and composite unique keys (basic question)

2021-03-31 Thread Michael Lewis
Etiquette on these lists is to reply in line or below the relevant portion,
not top-post with full quoting like default gmail behavior.

On Wed, Mar 31, 2021 at 9:18 AM Mohan Radhakrishnan <
radhakrishnan.mo...@gmail.com> wrote:

> But we don't search using UUIDs always. Only when data from another
> distributed service
> is received we need them and in such cases we have to join using them.
>

I haven't used them so I don't recall exactly, but I believe there is a
type of UUID generation which has some leading correlation to time which
would help with reducing the random I/O issue that Tom Lane mentioned. A
quick search of the archive may lead you to that, or someone else may chime
in with the name I expect.


> But for local data we can identify another composite unique key. Does
> PostgreSql
> create a unique index for us ? What about a FK that references this
> composite
> unique key ? Does it create a FK index ?
>

It is up to you to create whichever fkeys and indexes you require.
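
To make that concrete (a sketch only, with hypothetical names): declaring the composite UNIQUE constraint is what gives you its unique index, while the referencing FK columns get no index unless you create one yourself:

CREATE TABLE parent (
    id        uuid PRIMARY KEY,
    region    text NOT NULL,
    local_key text NOT NULL,
    UNIQUE (region, local_key)   -- the backing composite unique index is created automatically
);

CREATE TABLE child (
    id        uuid PRIMARY KEY,
    region    text NOT NULL,
    local_key text NOT NULL,
    FOREIGN KEY (region, local_key) REFERENCES parent (region, local_key)
);

-- the FK columns on the referencing side are not indexed automatically;
-- add this yourself if you join or cascade on them
CREATE INDEX child_region_local_key_idx ON child (region, local_key);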


Re: Debugging leaking memory in Postgresql 13.2/Postgis 3.1

2021-03-31 Thread Stephan Knauss

On 30.03.2021 20:46, Tom Lane wrote:

Stephan Knauss  writes:

The wiki suggested to dump MemoryContext states for more details, but
something strange happens when attaching gdb. It seems that the process
is immediately killed and I can no longer dump such details.

(I think the -v option is the one that matters on Linux, not -d
as you might guess).  The idea here is that the backends would
get an actual ENOMEM failure from malloc() before reaching the
point where the kernel's OOM-kill behavior takes over.  Given
that, they'd dump memory maps to stderr of their own accord,
and you could maybe get some insight as to what's leaking.
This'd also reduce the severity of the problem when it does
happen.


Hello Tom, the output below looks similar to the OOM output you 
expected. Can you give a hint how to interpret the results?


I had a backend which already had a large amount of memory allocated,
so I gave "gcore -a" a try.


In contrast to the advertised behavior, the process did not continue to
run, but I got a core file at least. This is probably related to gcore just
calling gdb attach, which somehow triggers a SIGKILL of all backends.


At 4.2 GB in size it hopefully contains most of the relevant memory
structures. Without a running process I still cannot call
MemoryContextStats(), but I found a macro which claims to decode the
memory structure post mortem:


https://www.cybertec-postgresql.com/en/checking-per-memory-context-memory-consumption/


This gave me the following memory structure:

How should it be interpreted? It looks like the size is in bytes, as it
calculates with pointers. But the numbers look a bit small, given that I
had a backend with roughly 6 GB of RSS memory.


I thought it might print the overall size and then indent and print the
memory of the children, but the numbers indicate this is not the case,
since a higher level shows a smaller size than its children:


  CachedPlanSource: 67840
   unnamed prepared statement: 261920

So how should it be read, and is there any indication of why I have a constantly increasing
memory footprint? Is there any indication of where the multiple gigabytes are
allocated?




root@0ec98d20bda2:/# gdb /usr/lib/postgresql/13/bin/postgres core.154218 

GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
    .

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/lib/postgresql/13/bin/postgres...Reading 
symbols from 
/usr/lib/debug/.build-id/31/ae2853776500091d313e76cf679017e697884b.debug...done.

done.

warning: core file may not match specified executable file.
[New LWP 154218]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: osm gis 172.20.0.3(51894) idle'.
#0  0x7fc01cfa07b7 in epoll_wait (epfd=4, events=0x55f403584080, 
maxevents=maxevents@entry=1, timeout=timeout@entry=-1) at 
../sysdeps/unix/sysv/linux/epoll_wait.c:30
30  ../sysdeps/unix/sysv/linux/epoll_wait.c: 
No such file or directory.
(gdb)
TopMemoryContext: 109528

 dynahash: 7968
 HandleParallelMessages: 7968
 dynahash: 7968
 dynahash: 7968
 dynahash: 7968
 dynahash: 24392
 dynahash: 24352
 RowDescriptionContext: 24352
 MessageContext: 7968
 dynahash: 7968
 dynahash: 32544
 TransactionAbortContext: 32544
 dynahash: 7968
 TopPortalContext: 7968
 dynahash: 16160
 CacheMemoryContext: 1302944
  CachedPlan: 138016
  CachedPlanSource: 67840
   unnamed prepared statement: 261920
  index info: 1824
  index info: 1824
  index info: 3872
  index info: 1824
  index info: 1824
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 1824
  index info: 3872
  relation rules: 32544
  index info: 1824
  index info: 1824
  index info: 1824
  index info: 3872
  relation rules: 24352
  index info: 3872
  index info: 3872
  index info: 1824
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 1824
  index info: 3872
  index info: 1824
  index info: 3872
  relation rules: 32544
  index info: 1824
  index info: 2848
  index info: 1824
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 3872
  index info: 1824
  index info: 3872
  index info: 1824
  index info: 1824
  relation rules: 32544
  index info: 1824
  index info: 2848
  index info: 1824
 

Copy Statistics Tables During Upgrade

2021-03-31 Thread Virendra Kumar
Hello Team,
I was doing an upgrade of one of our PG databases (9.6.11 to 11.9) and came
across the question of why PostgreSQL does not copy the statistics, so that we
have to run ANALYZE right after the upgrade. There are two situations where copying
them would help when we use this DB as our enterprise database:

1. If the DB is on the order of hundreds of TB and we have to run ANALYZE, it might
take several hours before we can hand the database over to application services.
2. We lose critical information like most-common values and most-common
frequencies, because these values are populated over time as many sessions hit
the tables with queries using different values of a column.

Any input on this is highly appreciated.


Regards,
Virendra Kumar



Re: Debugging leaking memory in Postgresql 13.2/Postgis 3.1

2021-03-31 Thread Tom Lane
Stephan Knauss  writes:
> Hello Tom, the output below looks similar to the OOM output you 
> expected. Can you give a hint how to interpret the results?

Looks like the answer is that wherever the leak is, it's not accounted
for by this info; none of those contexts are particularly large.

Based on nearby threads, it occurs to me to ask whether you have JIT
enabled, and if so whether turning it off helps.  There seems to be
a known leak of the code fragments generated by that in some cases.
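
A quick way to check and test that, as a sketch (jit is an ordinary run-time setting in PostgreSQL 13):

SHOW jit;        -- is JIT compilation enabled for this session?
SET jit = off;   -- turn it off for the current session and re-run the workload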

If that's not it, then the leak must be accumulating through plain
old malloc calls.  There's not much of that in the core backend
(although if you use ispell text search dictionaries, maybe [1] is
relevant), so my suspicions would next fall on any extensions you
might be using.

regards, tom lane

[1] https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=eba939551




Re: Debugging leaking memory in Postgresql 13.2/Postgis 3.1

2021-03-31 Thread Paul Ramsey



> On Mar 31, 2021, at 11:24 AM, Tom Lane  wrote:
> 
> Stephan Knauss  writes:
>> Hello Tom, the output below looks similar to the OOM output you 
>> expected. Can you give a hint how to interpret the results?
> 
> Looks like the answer is that wherever the leak is, it's not accounted
> for by this info; none of those contexts are particularly large.
> 
> Based on nearby threads, it occurs to me to ask whether you have JIT
> enabled, and if so whether turning it off helps.  There seems to be
> a known leak of the code fragments generated by that in some cases.
> 
> If that's not it, then the leak must be accumulating through plain
> old malloc calls.  There's not much of that in the core backend
> (although if you use ispell text search dictionaries, maybe [1] is
> relevant), so my suspicions would next fall on any extensions you
> might be using.

I would be interested in the queries being run. We have a reproducible leak in
the <-> geography operator that we have been unable to track down.

P

> 
>   regards, tom lane
> 
> [1] 
> https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=eba939551
> 
> 





Failed to initialize 13.2-1 cluster on Windows

2021-03-31 Thread Martin
Hello, 


I'm trying to install 13.2-1 on Windows 10. Previous version was 10.5,
installed using the EnterpriseDB installer, without any problems. That
version was completely removed before attempting to install 13.2. 


The EnterpriseDB installer failed to initialize the cluster. The error says
that initdb can find the postgres executable but it's of a different
version. I've read some threads that described permission problems, but
I couldn't solve it. What I've tried: 

- Installing everything by default (including all default folders) 


- Installing on different folders, on C: and D: disks, outside of
Program Files. 

- Installer is run with "Run as administrator" option 


- Tried to execute "initdb -D path/to/data/folder" manually after the
installer failed. Same message 


- Manually altered permission on installation folder, giving local users
full control on the whole folder (programs and data) 


- Tried the zip file without the installer. Uncompressed everything into a
folder outside Program Files and ran initdb by hand. Error: "initdb:
error: El programa «postgres» fue encontrado por
«C:\Apps\PostgreSQL\13\bin/initdb», pero no es de la misma versión que
initdb." BTW, I don't know why the messages are in Spanish. Although I'm
located in Argentina, my Windows and every program is installed in
English, and English is selected as the primary language. But, nevertheless,
that's unimportant. The translation is "the postgres program was found by
initdb, but it's not the same version". 


- When executing postgres.exe directly, it says "FATAL: XX000: postgres:
no se pudo localizar el ejecutable postgres correspondiente. UBICACIÓN:
getInstallationPaths,
d:\pginstaller_13.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:1489".
That would translate as "could not locate corresponding postgres
executable". 


I've read somewhere a recommendation to create a separate user
"postgres". Although that's common practice on Linux, I've never done it
on previous Windows installations. Would that be something to try? 


I'm running out of ideas for what else to try. Any help will be much
appreciated. 

Regards, 


Martin

Postgres connection to hot standby

2021-03-31 Thread Mutuku Ndeti
I have a third-party application running on PostgreSQL. I have a primary
server and a secondary. I have set up the DB on the primary to replicate to the
secondary (streaming) as a hot standby.

Is it possible to have an instance of the application access the DB on the
secondary in read-only mode? I have tried this and the application will not run.

-- 
www.agile.co.ke
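
For what it's worth, a streaming standby does accept read-only connections as long as hot_standby is on (the default in recent releases); a quick check from the secondary (a sketch):

SHOW hot_standby;              -- must be on for read-only connections during recovery
SELECT pg_is_in_recovery();    -- true on the standby; any write statement there is rejected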


Proposed Italian Translation of Code of Conduct Policy

2021-03-31 Thread Stacey Haysler

	
	
The PostgreSQL Community Code of Conduct Committee has received a draft
of the Italian translation of the Code of Conduct Policy updated August 18, 2020 for review.

The English version of the Policy is at:
https://www.postgresql.org/about/policies/coc/

The translation was created by Federico Campoli.

The translation was reviewed by Ilaria Battiston.

The proposed translation is attached as both a text file and a PDF to this message.

If you have any comments or suggestions for this translation, please bring them to our attention by 5:00 PM PDT on Wednesday, April 7, 2021. Thank you.

Regards,
Stacey

Stacey Haysler
Chair
PostgreSQL Community Code of Conduct Committee


Introduzione

Il progetto PostgreSQL è orgoglioso della qualità del proprio codice,
del proprio lavoro e dei risultati, tecnici e professionali della
propria comunità. È doveroso che ognuno che partecipi alla comunità si
comporti, nell'interesse comune, in modo professionale ed educato
portando rispetto verso utenti e sviluppatori.

A tal fine è stato creato questo Codice di Comportamento (Code of
Conduct, Coc) per gestire i rapporti all'interno della comunità e la
partecipazione al lavoro dell'organizzazione su larga scala. Questo
Codice di Comportamento è pensato per coprire tutte le interazioni tra i
membri della comunità in qualsiasi contesto, a meno che non sia presente
un altro Codice di Comportamento che abbia precedenza su questo (es.
Codice di Comportamento di una conferenza).

Inclusività e comportamento appropriato

La partecipazione al progetto PostgreSQL è aperta a chiunque abbia
interesse a lavorare con PostgreSQL, indipendentemente dal livello di
esperienza con il software o con la tecnologia. Tutti sono incoraggiati
allo sviluppo software, e in generale alla contribuzione, a prescindere
dalla propria formazione.

Sono incoraggiate discussioni riflessive e costruttive, relative al
software, la comunità, lo stato attuale e le possibili evoluzioni. Il
tema delle discussioni dovrebbe comunque riguardare il codice e progetti
correlati, progetti comunitari e l'infrastruttura.

Attacchi personali e commenti negativi riguardanti caratteristiche
personali sono inaccettabili e pertanto non saranno permessi. Alcuni
esempi di caratteristiche personali sono età, razza, etnicità,
religione, genere od orientamento sessuale.

Altri esempi di comportamento che violano questo Codice di Comportamento
sono minacce di violenza verso individui o gruppi, minacce di sabotaggio
sia esso di tipo professionale comunitario o di qualunque progetto,
attenzioni sessuali indesiderate sotto qualsiasi forma, comportamenti
che possano causare danno di reputazione al progetto PostgreSQL e
rifiuto di cessare qualsiasi condotta inappropriata quando richiesto.

Ritorsioni

Sono inoltre espressamente vietate ritorsioni contro una persona che
presenti un reclamo ai sensi del presente Codice di Comportamento o
contro chiunque stia prestando assistenza alle indagini per tale
reclamo. La ritorsione può avvenire sotto forma di queste ed altre
azioni:

-   ulteriori attacchi personali (pubblici o privati);
-   azioni che minano lo status professionale e/o lo status di un
individuo nei confronti del datore di lavoro, dei colleghi, dei
clienti o della comunità;
-   azioni che minacciano la privacy, la persona fisica, il benessere,
l'abitazione dell'individuo e/o la sua famiglia.

Gli atti di ritorsione saranno trattati alla stessa maniera di qualsiasi
altra violazione del presente Codice di Comportamento.

Comitato per il Codice di Comportamento

Il Core Team di PostgreSQL nominerà un Comitato per il Codice di
Comportamento, che si occuperà di ricevere ed indagare su tutti i
reclami, e un Presidente del Comitato. Qualsiasi membro della comunità
può offrirsi volontario per far parte del Comitato, ad eccezione dei
membri del Core Team. Poiché il Core Team sovrintende al Comitato, i
membri del Core Team non faranno parte del Comitato per evitare
conflitti di interesse. L'elenco dei membri del Comitato sarà sempre
disponibile al pubblico e può essere visualizzato qui 
https://www.postgresql.org/about/policies/coc_committee/.

I membri del Comitato cambieranno ogni anno. Il Core Team o il
Presidente del Comitato annunceranno le date di apertura e chiusura del
processo annuale di selezione dei membri attraverso i consueti canali di
comunicazione della comunità.

Qualsiasi membro della comunità che desidera far parte del Comitato
dovrà completare un questionario iniziale che verrà valutato dal Core
Team e dall'attuale Comitato. I membri in carica del Comitato
esamineranno i candidati ed effettueranno colloqui se necessario. Il
Comitato uscente formulerà le raccomandazioni ed il Core Team sceglierà
i nuovi membri per il Comitato.

Il periodo di transizione tra il nuovo Comitato e quello uscente potrà
durare fino a un mese per consentire il trasferimento di informazioni e
di responsabilità.

Sebbene non esista un numero specifico di membri

Re: Failed to initialize 13.2-1 cluster on Windows

2021-03-31 Thread Martin

Update: I've just tried with version 12.6-1 and it works perfectly fine.
Clearly, there is some bug in the published 13.2-1 version. 


On 2021-03-31 16:35, Martin wrote:

Hello, 

I'm trying to install 13.2-1 on Windows 10. Previous version was 10.5, installed using the EnterpriseDB installer, without any problems. That version was completely removed before attempting to install 13.2. 

EnterpriseDB installer failed to initialize the cluster. The error says that initdb can find postgres executable but it's of a different version. I've read some threads that described permission problems, but I couldn't solve it. What I've tried: 

- Installing everything by default (including all default folders) 

- Installing on different folders, on C: and D: disks, outside of Program Files. 

- Installer is run with "Run as administrator" option 

- Tried to execute "initdb -D path/to/data/folder" manually after the installer failed. Same message 

- Manually altered permission on installation folder, giving local users full control on the whole folder (programs and data) 

- Tried the zip file without installer. Uncompressed everything on a folder outside program files and running initdb by hand. Error: "initdb: error: El programa «postgres» fue encontrado por «C:\Apps\PostgreSQL\13\bin/initdb», pero no es de la misma versión que initdb." BTW, I don't know why messages are in spanish. Although I'm located in Argentina, my windows and every program is installed in english, and english is selected as primary language. But, nevertheless, that's unimportant. Translation is "postgres program was found by initdb, but it's a different version". 

- When executing postgres.exe directly, it says "FATAL: XX000: postgres: no se pudo localizar el ejecutable postgres correspondiente. UBICACIÓN: getInstallationPaths, d:\pginstaller_13.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:1489". That would translate as "could not locate corresponding postgres executable". 

I've read somewhere a recommendation to create a separate user "postgres". Although that's common practice on Linux, I've never done it on previous Windows installations. Would that be something to try? 

I'm running out of ideas for what else to try. Any help will be much appreciated. 

Regards, 


Martin

accessing cross-schema materialized views

2021-03-31 Thread Tim Clarke
We have:

create materialized view schema1.matview.
grant select on table schema1.matview to mygroup

create view schema2.usingview as select ... from schema1.matview
grant select on table schema2.using to mygroup

and yet we receive "permission denied for materialized view" on a user with the 
mygroup role selecting from schema2.usingview? The same user can select from 
schema1.matview without issue?

I must be tired and I can't see why that should fail :(

--
Tim Clarke





Re: accessing cross-schema materialized views

2021-03-31 Thread Rob Sargent

On 3/31/21 4:31 PM, Tim Clarke wrote:


We have:

    create materialized view schema1.matview.
    grant select on table schema1.matview to mygroup

    create view schema2.usingview as select ... from schema1.matview
    grant select on table schema2.using to mygroup


Is that schema2.using just a typo (phony names)?


Re: accessing cross-schema materialized views

2021-03-31 Thread Tom Lane
Tim Clarke  writes:
> We have:
> create materialized view schema1.matview.
> grant select on table schema1.matview to mygroup

> create view schema2.usingview as select ... from schema1.matview
> grant select on table schema2.using to mygroup

> and yet we receive "permission denied for materialized view" on a user with 
> the mygroup role selecting from schema2.usingview?

The owner of the schema2.usingview is the one who must have
privilege to read the underlying schema1.matview.  Our
permissions messages are, I fear, frequently not very good
about saying whose privileges were checked.

regards, tom lane
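
In concrete terms (a sketch, with a hypothetical role name standing in for whoever owns schema2.usingview), the grants have to cover the view's owner, not just the querying user:

-- run as a superuser or as the owner of schema1 and schema1.matview
GRANT USAGE ON SCHEMA schema1 TO usingview_owner;
GRANT SELECT ON schema1.matview TO usingview_owner;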




How to implement expiration in PostgreSQL?

2021-03-31 Thread Glen Huang
Hi,

I guess this question has been asked a million times, but all solutions I can 
find online don’t really work well for my case. I’ll list them here and hope 
someone can shed some light.

My use case is to implement joining clubs that require an entrance fee:

1. Each club only allows a maximum number of members.
2. When a person decides to join a club, the seat is reserved for a limited
amount of time. If that person fails to pay within that period, the seat will
be open again.

I want to write a query that can quickly list all clubs that still have open
seats, and #2 is where I want expiration to happen.

The solutions I find so far:

1. Exclude closed clubs in queries and periodically delete expired members

I can’t come up with a query that can accomplish this in an efficient way. 

WITH seated_member AS (
    SELECT
        club_id,
        count(member_id) AS num_seated_member
    FROM member
    WHERE paid OR join_time > now() - '1h'::interval
    GROUP BY club_id
),
open_member AS (
    SELECT
        club_id,
        max_num_member - coalesce(num_seated_member, 0) AS num_open_member
    FROM club LEFT JOIN seated_member USING (club_id)
)
SELECT club_id AS open_club
FROM open_member
WHERE num_open_member > 0

This requires going through all seated members, which can potentially be a large
set and take a long time.

I can of course add a num_open_member column to the club table and index it,
but the problem becomes how to automatically update it when a member expires,
which takes us back to square one.

All following solutions assume I add this column and seek to find a way to 
update it automatically.

2. Run a cron job

This won’t work because the number is updated only after the cron job is run, 
which only happens at intervals.

3. Update the column before running any related queries

This requires that I execute DELETE and UPDATE queries before all seat-related
queries. It's hard to manage and seems to slow down all such queries.

4. pg_cron

My environment wouldn’t allow me to install 3rd-party extensions, but even if I 
could, it seems pg_cron run cron jobs sequentially. I’m not sure it works well 
when I need to add a cron job for each newly joined member.

—

I’m not aware any other solutions. But the problem seems banal, and I believe 
it has been solved for a long time. Would really appreciate it if someone could 
at least point me in the right direction.

Regards,
Glen