Re: Is _ a supported way to create a column of array type?

2019-04-26 Thread Piotr Findeisen
Hi Tom,

Thanks for your response.

I think I need to provide some context for my question.
I am maintaining the Presto connector to Postgres (
https://github.com/prestosql/presto/tree/master/presto-postgresql).

When accessing a table in Postgres, we need to map columns' types to
appropriate types in Presto.
For mapping arrays, we need to know the number of array dimensions.
Currently we read this from pg_attribute.attndims, but this does not work
for _ columns.

1. Is there a better way to get array dimensions for a column of array type?
2. Is it possible to make pg_attribute.attndims have the correct value when a
column is defined using the _ form?
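For reference, a minimal psql sketch of the discrepancy (the table and column names are made up for illustration; the behaviour is as described above):

```sql
-- attndims records the number of bracket pairs written in CREATE TABLE;
-- a column declared via the internal _int4 name records 0 instead,
-- even though both columns end up with the same type, integer[].
CREATE TABLE dims_demo (a int4[], b _int4);

SELECT attname, attndims
  FROM pg_attribute
 WHERE attrelid = 'dims_demo'::regclass AND attnum > 0;
-- a has attndims = 1, b has attndims = 0
```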


Best,
Piotr




On Thu, Apr 25, 2019 at 11:03 PM Tom Lane  wrote:

> Piotr Findeisen  writes:
> > Internally, array types get a name in the form of `_`.
>
> Typically, yes.
>
> > *Can a user use `_` to define a column of array type?*
>
> Sure ... didn't you try it?
>
> > *Is it supported?*
>
> Not really, because it's not guaranteed that the name looks like that.
> There are various corner cases where something else would be generated,
> either to avoid a collision, or because truncation is needed.
>
> However, if you've taken the trouble to check what name actually got
> assigned to the array type, it's perfectly valid to use that name.
>
> > The reason I am asking is that e.g. int4[] and _int4 behave differently.
> > Although they look the same, they have different pg_attribute.attndims.
>
> Yeah.  Nothing really cares about attndims though ... it's vestigial.
> Perhaps we should remove it someday.
>
> regards, tom lane
>


TCP Resets when closing connection opened via SSL

2019-04-26 Thread Jānis Pūris
Hello,

I'm trying to do a simple health check for keepalived and other services via a 
Python script and the psycopg2 library. All seems to be fine until I close 
the connection, at which point a packet with a TCP reset is produced.
This has become very problematic and creates extensive noise in monitoring.
It also happens with the IntelliJ DataGrip-provided Java driver and PHP drivers, 
hence I do not think this is a driver problem, but PG instead.

Script
import psycopg2
import sys


def check():
   # Relying on .pgpass for password
   con = psycopg2.connect('user=monitoring dbname=postgres host=127.0.0.1')
   cur = con.cursor()
   cur.execute("select 'keepalived healthcheck'")
   cur.close()
   con.close()

try:
   check()
   print("ok")
except Exception as e:
   print("not ok")
   print(str(e))
   sys.exit(1)

tcpdump: tcpdump -v 'tcp[tcpflags] & (tcp-rst) != 0' -i lo
16:27:45.307006 IP (tos 0x0, ttl 64, id 8123, offset 0, flags [DF], proto TCP 
(6), length 40)
   localhost.40797 > localhost.postgres: Flags [R], cksum 0x0cca (correct), seq 
3830516781, win 0, length 0
From the PG side of things, all seems to be OK:
2019-04-23 16:27:45.300 CEST process=15615 c= t=0 s=5cbf20e1.3cff 
[unknown]@127.0.0.1:[unknown] app=[unknown] LOG:  connection received: 
host=127.0.0.1 port=40797
2019-04-23 16:27:45.304 CEST process=15615 c=authentication t=0 s=5cbf20e1.3cff 
monitoring@127.0.0.1:postgres app=[unknown] LOG:  connection authorized: 
user=monitoring database=postgres SSL enabled (protocol=TLSv1.2, 
cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)
2019-04-23 16:27:45.306 CEST process=15615 c=BEGIN t=0 s=5cbf20e1.3cff 
monitoring@127.0.0.1:postgres app=[unknown] LOG:  duration: 0.095 ms
2019-04-23 16:27:45.306 CEST process=15615 c=SELECT t=0 s=5cbf20e1.3cff 
monitoring@127.0.0.1:postgres app=[unknown] LOG:  duration: 0.234 ms
2019-04-23 16:27:45.306 CEST process=15615 c=idle in transaction t=0 
s=5cbf20e1.3cff monitoring@127.0.0.1:postgres app=[unknown] LOG:  
disconnection: session time: 0:00:00.006 user=monitoring database=postgres 
host=127.0.0.1 port=40797
I've tried various combinations of OS, psycopg2 and PG versions to no avail; my 
focus has been on reproducing with this driver only.

• OS: CentOS 7 and Ubuntu 18.04
• psycopg2: From latest down to 2.7.x
• PG: Latest 11 down to 9.4 (we use 9.4-bdr currently in production)

I cannot reproduce this when SSL is disabled on PG. However, this is not a 
solution, as we are committed to using SSL for all our connections to the DB.

Has anyone experienced this, and could you share any advice on how I could 
prevent it from producing network packets with the TCP [R] flag?

Thank you very much in advance and any advice will be much appreciated.

P.S. I've also posted this on the following channels:

• https://dba.stackexchange.com/questions/235497/tcp-resets-when-using-psycopg2
• https://github.com/psycopg/psycopg2/issues/906


Med vennlig hilsen.
Best regards, Janis Puris.


Re: TCP Resets when closing connection opened via SSL

2019-04-26 Thread Jānis Pūris
Resending the original message without any formatting.

Hello,

I'm trying to do a simple health check for keepalived and other services via a 
Python script and the psycopg2 library. All seems to be fine until I close 
the connection, at which point a packet with a TCP reset is produced.

This has become very problematic and creates extensive noise in monitoring.

It also happens with the IntelliJ DataGrip-provided Java driver and PHP drivers, 
hence I do not think this is a driver problem, but PG instead.

Script

import psycopg2
import sys

def check():
   # Relying on .pgpass for password
   con = psycopg2.connect('user=monitoring dbname=postgres host=127.0.0.1')
   cur = con.cursor()
   cur.execute("select 'keepalived healthcheck'")
   cur.close()
   con.close()

try:
   check()
   print("ok")
except Exception as e:
   print("not ok")
   print(str(e))
   sys.exit(1)

tcpdump: tcpdump -v 'tcp[tcpflags] & (tcp-rst) != 0' -i lo

16:27:45.307006 IP (tos 0x0, ttl 64, id 8123, offset 0, flags [DF], proto TCP 
(6), length 40)
   localhost.40797 > localhost.postgres: Flags [R], cksum 0x0cca (correct), seq 
3830516781, win 0, length 0

From the PG side of things, all seems to be OK:

2019-04-23 16:27:45.300 CEST process=15615 c= t=0 s=5cbf20e1.3cff 
[unknown]@127.0.0.1:[unknown] app=[unknown] LOG:  connection received: 
host=127.0.0.1 port=40797
2019-04-23 16:27:45.304 CEST process=15615 c=authentication t=0 s=5cbf20e1.3cff 
monitoring@127.0.0.1:postgres app=[unknown] LOG:  connection authorized: 
user=monitoring database=postgres SSL enabled (protocol=TLSv1.2, 
cipher=ECDHE-RSA-AES256-GCM-SHA384, compression=off)
2019-04-23 16:27:45.306 CEST process=15615 c=BEGIN t=0 s=5cbf20e1.3cff 
monitoring@127.0.0.1:postgres app=[unknown] LOG:  duration: 0.095 ms
2019-04-23 16:27:45.306 CEST process=15615 c=SELECT t=0 s=5cbf20e1.3cff 
monitoring@127.0.0.1:postgres app=[unknown] LOG:  duration: 0.234 ms
2019-04-23 16:27:45.306 CEST process=15615 c=idle in transaction t=0 
s=5cbf20e1.3cff monitoring@127.0.0.1:postgres app=[unknown] LOG:  
disconnection: session time: 0:00:00.006 user=monitoring database=postgres 
host=127.0.0.1 port=40797

I've tried various combinations of OS, psycopg2 and PG versions to no avail; my 
focus has been on reproducing with this driver only.

• OS: CentOS 7 and Ubuntu 18.04
• psycopg2: From latest down to 2.7.x
• PG: Latest 11 down to 9.4 (we use 9.4-bdr currently in production)

I cannot reproduce this when SSL is disabled on PG. However, this is not a 
solution, as we are committed to using SSL for all our connections to the DB.

Has anyone experienced this, and could you share any advice on how I could 
prevent it from producing network packets with the TCP [R] flag?

Thank you very much in advance and any advice will be much appreciated.

P.S. I've also posted this on the following channels:

• https://dba.stackexchange.com/questions/235497/tcp-resets-when-using-psycopg2
• https://github.com/psycopg/psycopg2/issues/906

Med vennlig hilsen.
Best regards, Janis Puris.


Re: TCP Resets when closing connection opened via SSL

2019-04-26 Thread Rainer Pruy
On 26.04.19 09:41, Jānis Pūris wrote:
> /Resending the original message without any formatting./
>
> Hello,
>
> I'm trying to do a simple health check for keepalived and other
> services via a python script and psycopg2 library. All seems to be all
> right, until I close the connection, at which point a packet with TCP
> reset is produced.
>
> This has become very problematic and creates extensive noise in
> monitoring.
>
> It also happens with IntelliJ DataGrip provided Java driver and PHP
> drivers. And hence I do not think this is the drivers problem, but PG
> instead.


> From PG side of things, all seems to be OK
>
What actual problem are you experiencing?

Usually, getting a TCP reset is a normal event on TCP connections, not
necessarily related to any application-level activity.

A TCP reset is triggered as soon as a packet is received on a
"closed" connection.

Such a "late orphaned packet" may e.g. result from an ACK for a packet in
transit at the time of the close.


For more detailed information you might look at the exact sequence of
packets being exchanged.

This way, you might get better insight into what is happening and better
information on whether something is going wrong.


That you experience TCP resets only with SSL involved might just be
accidental, or a consequence of SSL stack activity. It could even be caused
by the way the (SSL) connection is being configured/set up in this specific
case.

But nevertheless, it might be perfectly OK for such packets to be generated.

So, please re-think why you are considering them problematic in the
first place.


Regards

Rainer



Re: Is _ a supported way to create a column of array type?

2019-04-26 Thread Piotr Findeisen
Hi Tom,

I think I understand now.

The Postgres type system does not distinguish between array types with different
numbers of dimensions. int[], int[][] and int[][][][][][] are all equivalent
to the type system.

The number of dimensions is part of the value, though, and execution takes care
of it.
If I subscript an array with the wrong "number of brackets" (as in
`select (array[1,2,3])[1][1][1]`) I get NULL.
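Both behaviours are easy to confirm in psql (a small sketch against a stock PostgreSQL install):

```sql
-- The bracket count in the type name is not part of the type itself:
SELECT 'int[]'::regtype = 'int[][]'::regtype;   -- t

-- Subscripting with the "wrong" number of brackets just yields NULL:
SELECT (ARRAY[1,2,3])[1][1][1];                 -- NULL
```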

The Presto type system, however, distinguishes array(integer),
array(array(integer))...
(using Presto terms). And execution is (expectedly) not as flexible.

We can inspect the number of brackets that were written in the table creation
command, but that is inferring (and enforcing) strong typing for something that
is not strongly typed. Thus, we can fail at execution.

Do you have any plans to support arrays with different number of dimensions
in the type system?

Best,
Piotr


Re: Is _ a supported way to create a column of array type?

2019-04-26 Thread Joe Conway
On 4/26/19 6:50 AM, Piotr Findeisen wrote:
> Presto type system however distinguishes array(integer),
> array(array(integer))...
> (using Presto therms). And execution is (expectedly) not as flexible.

Fine point here -- Postgres does not actually have arrays-of-arrays, it
has multi-dimensional arrays.

So in other words, you cannot have an array of element type _int4, but
rather you literally have an array of element type int4 with two
dimensions, etc.
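A quick psql illustration of this point (a sketch added for clarity, not from the original mail):

```sql
-- A nested ARRAY constructor yields a single multi-dimensional value;
-- its type is plain integer[], regardless of dimensionality:
SELECT pg_typeof(ARRAY[[1,2],[3,4]]);    -- integer[]
SELECT array_ndims(ARRAY[[1,2],[3,4]]);  -- 2
```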

Joe
-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: Inexplicable UPDATE...RETURNING behaviour

2019-04-26 Thread Joe Wildish
Hi Tom,

> On 16 Apr 2019, at 00:58, Tom Lane  wrote:
> 
> Joe Wildish  writes:
>> We are seeing an inexplicable behaviour when issuing an "UPDATE..RETURNING" 
>> statement. I am unsure if it is a Postgres bug. Additional eyes-on would be 
>> much appreciated.
> 
>> When issuing the following statement we are seeing multiple rows UPDATE'd 
>> despite the use of LIMIT 1 and despite the "uid" column in the "some_queue" 
>> table having a PRIMARY KEY constraint on it:
> 
>> UPDATE queue.some_queue AS q
>>   SET (state, awaiting) = ('executing', FALSE)
>>  FROM (SELECT uid
>>  FROM queue.some_queue
>> WHERE awaiting
>>   AND process_after <= CURRENT_TIMESTAMP AT TIME ZONE 'UTC'
>> ORDER BY process_after ASC
>>   FOR UPDATE SKIP LOCKED
>> LIMIT 1)
>>AS dq(uid)
>> WHERE q.uid = dq.uid
>> RETURNING q.uid;
> 
> Yeah, there was another similar complaint a few weeks ago --- has this
> suddenly gotten to be a popular coding idea?

I can't comment on that specifically. However, in my case, the reason this came 
up was simply that the original code was issuing a SELECT to grab a UID and 
then immediately issuing an UPDATE against the same UID. This isn't necessarily 
a problem, of course, given the semantics of FOR UPDATE SKIP LOCKED LIMIT 1. It 
just seemed neater to have a single statement handle the dequeue operation. An 
added bonus was that tracking statements via pg_stat_activity for queue-related 
operations became easier to comprehend, as the dequeue was no longer split 
between two statements.

> The basic problem with what you have here is that FOR UPDATE (especially
> with SKIP LOCKED) makes the sub-select's output unstable by definition.
> If it's executed more than once then you might get different rows back,
> allowing the outer UPDATE's join to potentially match multiple rows from
> the outer instance of queue.some_queue.  Typically, since it's LIMIT 1,
> I'd think that the planner would put dq on the outside of a nestloop plan
> and you'd escape seeing any problem --- but if it gets put on the inside
> of a nestloop, it's going to misbehave.
> ... 8< ...

Unfortunately I can't remember exactly what the plans were doing --- we did take 
a look at EXPLAIN to see if we could figure out an explanation for the 
behaviour, but I foolishly didn't keep a copy of the outputs.

> The way I'd recommend fixing it is to put the FOR UPDATE into a WITH
> to guarantee single execution:
> 
> WITH dq(uid) AS (SELECT uid ... LIMIT 1)
> UPDATE queue.some_queue q SET ...
> FROM dq
> WHERE q.uid = dq.uid
> RETURNING q.uid;

Thanks. We are now running a version of the code that does "UPDATE .. WHERE 
q.uid = (SELECT .. LIMIT 1) RETURNING q.uid" and the multiple dequeues have 
gone away. The subselect therefore appears to be executed only once in this 
scenario too.
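Spelled out, the scalar-subquery form would look roughly like this (reconstructed from the column names in the original statement; the thread does not show the exact current code):

```sql
UPDATE queue.some_queue AS q
   SET (state, awaiting) = ('executing', FALSE)
 WHERE q.uid = (SELECT uid
                  FROM queue.some_queue
                 WHERE awaiting
                   AND process_after <= CURRENT_TIMESTAMP AT TIME ZONE 'UTC'
              ORDER BY process_after ASC
                   FOR UPDATE SKIP LOCKED
                 LIMIT 1)
RETURNING q.uid;
```

An uncorrelated scalar subquery like this is planned as an init-plan and evaluated once, which would explain why the multiple dequeues disappear.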

Ironically my original version of this code used the WITH construct. I switched 
to using the subselect instead, purely to make it easier to write a query over 
pg_stat_activity that classified each statement on the basis of if they were 
INSERTing, UPDATEing or DELETEing from the queues :-)

Thanks for your help.

Cheers,
-Joe



9.6.9 Default configuration for a default installation but different with-krb-srvnam

2019-04-26 Thread Jean-Philippe Chenel
Hi,


Currently, select version() reports the installed version:

PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1), compiled 
by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit


The database is running on an installation of Ubuntu 16.04 x64.


I've configured the GSSAPI authentication with MS Active Directory and it works 
very well.


The problem is that we have dev and prod environments, and each server must be 
configured with GSSAPI against the domain controller. The default user is 
"postgres", and it cannot be bound for both PostgreSQL servers at the same time 
to the same userPrincipalName on the Windows domain controller.


1. So my question is: how can I compile this version of PostgreSQL (9.6.9) and 
keep everything else the same as the previously installed default version, but 
with --with-krb-srvnam=POSTGRES_DEV to change the default user name? Then one 
server will have the postgres user and the other one will have the postgres_dev 
user.

2. I'm planning to shut down the PostgreSQL service and then replace the files 
generated by the make install command. After restarting the service, will 
everything work as expected with the same database objects that were running 
before?

3. Maybe we can configure things differently; if something else can be done to 
make this work, I'm open to suggestions.


Currently these packages are installed from apt-get install, nothing special 
except gssapi authentication.


postgresql-9.6/now 9.6.9-2.pgdg16.04+1 amd64
postgresql-9.6-postgis-2.4/now 2.4.4+dfsg-1.pgdg16.04+1 amd64
postgresql-9.6-postgis-2.4-scripts/now 2.4.4+dfsg-1.pgdg16.04+1 all
postgresql-client-9.6/now 9.6.9-2.pgdg16.04+1 amd64
postgresql-client-common/now 191.pgdg16.04+1 all
postgresql-common/now 191.pgdg16.04+1 all
postgresql-contrib-9.6/now 9.6.9-2.pgdg16.04+1 amd64



With best regards,


Re: 9.6.9 Default configuration for a default installation but different with-krb-srvnam

2019-04-26 Thread Laurenz Albe
On Fri, 2019-04-26 at 13:18 +, Jean-Philippe Chenel wrote:
> 1. So my question is, how can I compile this version of postgresql (9.6.9) 
> and have the same real things
> of the default previously installed version, but with the 
> --with-krb-srvnam=POSTGRES_DEV to change the
> default user name? So one server will have the postgres user and the other 
> one will have postgres_dev user.

You can run

  pg_config --configure

to see how PostgreSQL was configured.

> 2. I'm planning to shut down the postgresql service and then replace files 
> generated by the make install
> command. After restarting the service, all will work  as expected with the 
> same database objects that were running before?

If you use the same major release and configured it identically on the same 
architecture, yes.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Is _ a supported way to create a column of array type?

2019-04-26 Thread Tom Lane
Piotr Findeisen  writes:
> Do you have any plans to support arrays with different number of dimensions
> in the type system?

Mmm ... don't hold your breath.  People have speculated about that,
but I haven't seen any actual proposals, and it's hard to see how
we could do it without creating compatibility problems that would
outweigh the value of the feature.

In very late-model Postgres (I think just 11 and up) you can sort
of fake it by using arrays of domains:

regression=# create domain intarray as int4[];
CREATE DOMAIN
regression=# create table foo (f1 intarray[]);
CREATE TABLE
regression=# insert into foo values(array[array[4]]);
ERROR:  column "f1" is of type intarray[] but expression is of type integer[]
LINE 1: insert into foo values(array[array[4]]);
   ^
HINT:  You will need to rewrite or cast the expression.
regression=# insert into foo values(array[array[4]::intarray]);
INSERT 0 1

But as this example shows, it's not exactly a transparent solution.
It might be possible to make this specific case work better, but
I think you'd inevitably end up needing lots of explicit casts.

regards, tom lane




Re: TCP Resets when closing connection opened via SSL

2019-04-26 Thread Tom Lane
Jānis Pūris  writes:
> I'm trying to do a simple health check for keepalived and other services via 
> a python script and psycopg2 library. All seems to be all right, until I 
> close the connection, at which point a packet with TCP reset is produced.

Hm.  It's fairly obvious from the postmaster log that the client side
is not bothering to close the transaction it started:

> 2019-04-23 16:27:45.306 CEST process=15615 c=BEGIN t=0 s=5cbf20e1.3cff 
> monitoring@127.0.0.1:postgres app=[unknown] LOG:  duration: 0.095 ms
> 2019-04-23 16:27:45.306 CEST process=15615 c=SELECT t=0 s=5cbf20e1.3cff 
> monitoring@127.0.0.1:postgres app=[unknown] LOG:  duration: 0.234 ms
> 2019-04-23 16:27:45.306 CEST process=15615 c=idle in transaction t=0 
> s=5cbf20e1.3cff monitoring@127.0.0.1:postgres app=[unknown] LOG:  
> disconnection: session time: 0:00:00.006 user=monitoring database=postgres 
> host=127.0.0.1 port=40797

There's no COMMIT or ROLLBACK to go with the BEGIN, and that's reflected
in the fact that the disconnection message shows c=idle in transaction.
Now, I doubt that would have any impact on the TCP-level session behavior,
but it suggests that maybe the client isn't bothering to close the session
cleanly either.  The RST rather than FIN would then likely be caused by
SSL having to do an unclean shutdown.

regards, tom lane




Re: TCP Resets when closing connection opened via SSL

2019-04-26 Thread Jānis Pūris
Thanks for the insight, Tom.

> It's fairly obvious from the postmaster log that the client side
> is not bothering to close the transaction it started

That's what I was also thinking, but I've managed to reproduce it with 
autocommit, or with a commit before closing the connection, as well.
This is only reproducible with an SSL connection.

I've reproduced it with the Ubuntu package's bundled certs, my own self-created 
ones, ones generated with a CA, etc., hence I do not think it is a problem with 
the certificates themselves.
This happens with various clients: the PHP driver, the Java driver, and also 
libpq as used by psycopg2. We are running a lot of different kinds of 
microservices, and as far as I can tell, almost all (if not all) of them have 
this problem manifest.

I can not however reproduce this when doing something like
$ psql "user=postgres sslmode=require host=localhost dbname=postgres"
postgres=# \q

> So, please re-think why you are considering them problematic in the first 
> place.

This is an issue because we have a lot of connections being initiated and 
closed, and this creates a lot of TCP resets.

We could put something like pgbouncer in front of it to pool the connections 
and reduce the reset count, but this would not solve the root cause, merely 
work around it.

Other than that, this does not seem to "break" anything. It is basically just 
large noise for monitoring and huge numbers of resets when looking at 
something like netstat. And it is by no means an occasional occurrence.

Is no one else experiencing this?
Should I just assume this is "normal" and forget about it? I'd still want to 
understand what's going on, though.

Med vennlig hilsen.
Best regards, Janis Puris.
On 26 Apr 2019, 16:17 +0200, Tom Lane , wrote:
> Jānis Pūris  writes:
> > I'm trying to do a simple health check for keepalived and other services 
> > via a python script and psycopg2 library. All seems to be all right, until 
> > I close the connection, at which point a packet with TCP reset is produced.
>
> Hm. It's fairly obvious from the postmaster log that the client side
> is not bothering to close the transaction it started:
>
> > 2019-04-23 16:27:45.306 CEST process=15615 c=BEGIN t=0 s=5cbf20e1.3cff 
> > monitoring@127.0.0.1:postgres app=[unknown] LOG: duration: 0.095 ms
> > 2019-04-23 16:27:45.306 CEST process=15615 c=SELECT t=0 s=5cbf20e1.3cff 
> > monitoring@127.0.0.1:postgres app=[unknown] LOG: duration: 0.234 ms
> > 2019-04-23 16:27:45.306 CEST process=15615 c=idle in transaction t=0 
> > s=5cbf20e1.3cff monitoring@127.0.0.1:postgres app=[unknown] LOG: 
> > disconnection: session time: 0:00:00.006 user=monitoring database=postgres 
> > host=127.0.0.1 port=40797
>
> There's no COMMIT or ROLLBACK to go with the BEGIN, and that's reflected
> in the fact that the disconnection message shows c=idle in transaction.
> Now, I doubt that would have any impact on the TCP-level session behavior,
> but it suggests that maybe the client isn't bothering to close the session
> cleanly either. The RST rather than FIN would then likely be caused by
> SSL having to do an unclean shutdown.
>
> regards, tom lane


Re: TCP Resets when closing connection opened via SSL

2019-04-26 Thread Tom Lane
Jānis Pūris  writes:
> This is only reproducible with SSL connection.

Seems like you might get more insight from OpenSSL experts, then.
(No idea where you find some, but they must exist.)

regards, tom lane




Re: TCP Resets when closing connection opened via SSL

2019-04-26 Thread Jeremy Harris
On 26/04/2019 18:19, Tom Lane wrote:
> Jānis Pūris  writes:
>> This is only reproducible with SSL connection.
> 
> Seems like you might get more insight from OpenSSL experts, then.
> (No idea where you find some, but they must exist.)

List-Id: 

-- 
Cheers,
  Jeremy




Computed index on transformation of jsonb key set

2019-04-26 Thread Steven Schlansker
Hi Postgres fans,

We store a Java Map in Postgres as a jsonb column.
As json does not have a UUID type, it is of course stored as text.

A simple value might be:
{"04e623c0-6940-542f-a0de-4c999c626dfe": 5000, 
"6d3e24b6-9e8c-5eb1-9e4c-f32cc40864c9": 20825}

I am trying to implement an efficient set-intersection test.
"Find all rows where the map has a key in (uuid1, uuid2, uuid3)"

The most straightforward approach is to create a GIN over the whole column.
This has the documented disadvantage that all keys and values are stored in the 
GIN repeatedly, with the kicker that the UUID will be stored as text.

I figured I'd end up with significantly better storage and performance 
characteristics if I first compute a uuid[] value and build the GIN over that, 
and use the array operator class instead.  Additionally, this eliminates 
possible confusion about uuid casing (text is case sensitive, uuid is not) and 
this has already caused at least one bug in our application.

I attempted to optimize a query like:
select * from tbl where array(select jsonb_object_keys(mapData)::uuid) && 
array['320982a7-cfaa-572a-b5ea-2074d7f3b014'::uuid];

with:
create index my_idx on tbl using gin((array(select 
jsonb_object_keys(mapData)::uuid)));

Of course, this fails, ERROR: cannot use subquery in index expression
Clever me says, "well, I'll just use array_agg instead!"

create index my_idx on tbl using 
gin((array_agg(jsonb_object_keys(mapData)::uuid)));

Postgres 9.6 says:
ERROR:  aggregate functions are not allowed in index expressions

Just to see what happened, knowing that GIN supports multi-valued data, I tried:

create index my_idx on tbl using gin((jsonb_object_keys(mapData)::uuid));
ERROR:  index expression cannot return a set

How can I efficiently implement the feature I've described?  It seems difficult 
to use computed indexing with GIN.

Thank you for any advice and have a great weekend,
Steven





Re: Computed index on transformation of jsonb key set

2019-04-26 Thread David G. Johnston
On Fri, Apr 26, 2019 at 2:25 PM Steven Schlansker <
stevenschlans...@gmail.com> wrote:

> How can I efficiently implement the feature I've described?  It seems
> difficult to use computed indexing with GIN.
>

Don't use a computed index?  Add a trigger to populate a physical column
and index that.  Modify queries to make simple tests against the stored
array.
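A minimal sketch of that approach (column and index names borrowed from the thread; the trigger function name is made up, and mapdata is assumed to be the jsonb column):

```sql
ALTER TABLE tbl ADD COLUMN map_keys uuid[];

-- Keep the uuid[] column in sync with the jsonb map on every write.
CREATE FUNCTION tbl_sync_map_keys() RETURNS trigger AS $$
BEGIN
  NEW.map_keys := ARRAY(SELECT jsonb_object_keys(NEW.mapdata)::uuid);
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER tbl_map_keys BEFORE INSERT OR UPDATE ON tbl
  FOR EACH ROW EXECUTE PROCEDURE tbl_sync_map_keys();

CREATE INDEX my_idx ON tbl USING gin (map_keys);

-- Queries then test the stored array directly:
SELECT * FROM tbl
 WHERE map_keys && ARRAY['320982a7-cfaa-572a-b5ea-2074d7f3b014'::uuid];
```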

David J.


Re: Computed index on transformation of jsonb key set

2019-04-26 Thread Rob Sargent



On 4/26/19 3:25 PM, Steven Schlansker wrote:


How can I efficiently implement the feature I've described?  It seems difficult 
to use computed indexing with GIN.




Storing the map in a child table (parentId, UUID, int) is out of the 
question?





Re: Computed index on transformation of jsonb key set

2019-04-26 Thread Steven Schlansker



> On Apr 26, 2019, at 2:33 PM, Rob Sargent  wrote:
> 
> 
> On 4/26/19 3:25 PM, Steven Schlansker wrote:
>> 
>> 
>> How can I efficiently implement the feature I've described?  It seems 
>> difficult to use computed indexing with GIN.
>> 
> Storing the map in a child table (parentId, UUID, int) is out of the question?

Yes, thanks Rob and David, that will be my approach if I can't figure out 
anything better.

I was hoping to not have to denormalize the data given that Postgres generally 
has great support for computing indexes,
both to avoid storage costs and eliminate the need to maintain triggers or 
application code to handle it.





Re: Computed index on transformation of jsonb key set

2019-04-26 Thread Rob Sargent



On 4/26/19 3:53 PM, Steven Schlansker wrote:




Yes, thanks Rob and David, that will be my approach if I can't figure out 
anything better.

I was hoping to not have to denormalize the data given that Postgres generally 
has great support for computing indexes,
both to avoid storage costs and eliminate the need to maintain triggers or 
application code to handle it.



Is that json structure really more compact than the matching table (even 
with the additional parent id)?





Re: 9.6.9 Default configuration for a default installation but different with-krb-srvnam

2019-04-26 Thread Laurenz Albe
On Fri, 2019-04-26 at 20:25 +, Jean-Philippe Chenel wrote:
> > > 1. So my question is, how can I compile this version of postgresql 
> > > (9.6.9) and have the same real things
> > > of the default previously installed version, but with the 
> > > --with-krb-srvnam=POSTGRES_DEV to change the
> > > default user name? So one server will have the postgres user and the 
> > > other one will have postgres_dev user.
> > 
> > You can run
> > 
> >   pg_config --configure
> > 
> > to see how PostgreSQL was configured.
>
Do you know if it is possible to run a query or something else, because 
pg_config isn't in the binaries?
I've seen it, but it was from version 10 or 11, not 9.6.

You have to install the package that contains the headers and development tools 
for that.
Usually it is called *-dev or *-devel.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com