RPM dependency on pg_partman12 fails

2021-01-22 Thread Peter Krefting

Hi!

We are using the Partition Manager (pg_partman) in our software, and 
to make sure the correct version is installed, our RPM (CentOS 7) has 
declared a dependency on the pg_partman12 package.


However, the latest update to the package renames it without a 
Provides on the old name, so a "yum update" now fails with an error:


Error: Package: 
   Requires: pg_partman12
   Removing: pg_partman12-4.4.0-1.rhel7.x86_64 
(@/pg_partman12-4.4.0-1.rhel7.x86_64)
   pg_partman12 = 4.4.0-1.rhel7
   Obsoleted By: pg_partman_12-4.4.1-1.rhel7.x86_64 (pgdg12)
   Not found
   Available: pg_partman12-4.2.0-1.rhel7.1.x86_64 (pgdg12)
   pg_partman12 = 4.2.0-1.rhel7.1
   Available: pg_partman12-4.3.0-1.rhel7.x86_64 (pgdg12)
   pg_partman12 = 4.3.0-1.rhel7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Is this an error in the pg_partman package, or are we declaring the 
dependency incorrectly on our end?
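For the record, the usual fix on the packaging side is for the renamed package to carry both compatibility tags, so a rename stays transparent to third-party dependencies. A hedged sketch of the relevant spec lines (versions illustrative; judging by the error output, the Obsoletes is already there and the Provides is what is missing):

```
# In the renamed pg_partman_12 spec file (illustrative):
Obsoletes: pg_partman12 < 4.4.1
# Needed so that third-party "Requires: pg_partman12" still resolves:
Provides:  pg_partman12 = %{version}-%{release}
```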


--
\\// Peter - http://www.softwolves.pp.se/




Re: FDW connections

2021-01-22 Thread Laurenz Albe
On Fri, 2021-01-22 at 16:03 +1100, Steve Baldwin wrote:
> If I have made a query on a foreign table (using postgres_fdw),
>  it establishes a connection automatically. Is there any way to
>  disconnect that fdw connection without disconnecting the session
>  that instigated it?

No.

From PostgreSQL v14 on, there is the "idle_session_timeout" that you
could set on the server to close such sessions.  postgres_fdw will
silently re-establish such broken connections.  You could set this
parameter in the foreign server definition.

But this is a strange request: why would you want to close such
connections before the database session ends?
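As a sketch of one way to apply that timeout, assuming the remote server runs v14+ and the user mapping connects as a dedicated role (hypothetically named fdw_user):

```
-- On the REMOTE server: close sessions idle for more than a minute.
-- "fdw_user" is a hypothetical role used by the postgres_fdw user mapping.
ALTER ROLE fdw_user SET idle_session_timeout = '1min';
```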

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





RE: FDW connections

2021-01-22 Thread Hou, Zhijie
> > If I have made a query on a foreign table (using postgres_fdw),  it
> > establishes a connection automatically. Is there any way to
> > disconnect that fdw connection without disconnecting the session  that
> > instigated it?
> 
> No.
> 
> From PostgreSQL v14 on, there is the "idle_session_timeout" that you could
> set on the server to close such sessions.  postgres_fdw will silently
> re-establish such broken connections.  You could set this parameter in the
> foreign server definition.
> 
> But this is a strange request: why would you want to close such connections
> before the database session ends?
> 

Hi

There are two new functions currently under review:

postgres_fdw_disconnect()
postgres_fdw_disconnect_all()

These functions may solve your problem.
If you are interested, you can take a look at [1].

The functions have not been committed yet, but they may be available in PG 14.

[1] 
https://www.postgresql.org/message-id/CALj2ACVcpU%3DwB7G%3DzT8msVHvPs0-y0BbviupiT%2Bf3--bGYaOMA%40mail.gmail.com
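If the patch lands, usage would presumably look something like this (function names are from the thread above; the exact signatures may still change before commit):

```
-- Proposed API, not yet committed:
SELECT postgres_fdw_disconnect('my_foreign_server');  -- drop one server's cached connection
SELECT postgres_fdw_disconnect_all();                 -- drop every cached connection
```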


Best regards,
houzj








Re: Copy & Re-copy of DB

2021-01-22 Thread Benedict Holland
I mean... the best? You just laid out a bunch of steps to define a process.
I will tell you that this sounds like a bad process, and that there are
easier ways to make changes: working inside a transaction, backing up the
database before making changes, or creating a table replication job, making
changes on one server, and pushing them to the other one.

But you didn't want to do that. You wanted to create a single file for your
database, load it into a server, make changes, and copy them back. I would
say that this is an overly complicated and unnecessary process but if that
is how you want to make changes then what you described is the set of
steps.

Thanks,
Ben

On Fri, Jan 22, 2021, 2:35 AM Ken Tanzer  wrote:

>
>
> On Thu, Jan 21, 2021 at 11:12 PM sivapostg...@yahoo.com <
> sivapostg...@yahoo.com> wrote:
>
>> So the solution to the issue will be
>>
>> 1.  Backup a DB using PGDUMP from 1st server.
>> 2.  Restore the DB in 2nd server.
>> 3.  Make required changes in the 2nd server.
>> 4.  Backup that DB using PGDUMP from 2nd server.
>> 5.  Delete / Rename that DB in the 1st server
>> 6.  Restore that DB in the 1st server.
>> 7.  Work again in the 1st server.
>>
>> Is this the best way to carry out this process?
>>
>>
> Well I don't know if it's best or not.  But you said you wanted:
>
> to copy the database from one server to another, work in another server,
>> and re-copy it in the original server.
>>
>> (and additionally end up with the database having the same name on the
> original server.)
>
> So your steps do seem like they would be one way to accomplish that.  What
> counts as the "best" way would depend on what else is important to you
> about this process, i.e. what criteria would you use for best?
>
> Cheers,
> Ken
>
>
>
>
> --
> AGENCY Software
> A Free Software data system
> By and for non-profits
> *http://agency-software.org/ *
> *https://demo.agency-software.org/client
> *
> ken.tan...@agency-software.org
> (253) 245-3801
>
> Subscribe to the mailing list to learn more about AGENCY or follow the
> discussion.
>


Re: Copy & Re-copy of DB

2021-01-22 Thread Rory Campbell-Lange
On 22/01/21, Benedict Holland (benedict.m.holl...@gmail.com) wrote:
> I mean... the best? You just laid out a bunch of steps to define a process.
> 
> > On Thu, Jan 21, 2021 at 11:12 PM sivapostg...@yahoo.com <
> > sivapostg...@yahoo.com> wrote:
> >
> >> So the solution to the issue will be
> >>
> >> 1.  Backup a DB using PGDUMP from 1st server.
> >> 2.  Restore the DB in 2nd server.
> >> 3.  Make required changes in the 2nd server.
> >> 4.  Backup that DB using PGDUMP from 2nd server.
> >> 5.  Delete / Rename that DB in the 1st server
> >> 6.  Restore that DB in the 1st server.
> >> 7.  Work again in the 1st server.
> >>
> >> Is this the best way to carry out this process?

Rather late to the party, and I expect this has been mentioned already,
but presumably changes to any database are either to do with the data or
to do with aspects such as the pl functions.

Data transformations can be tested in both production and testing
environments inside transactions, so that the logic of the transformation
can be captured in a set of SQL statements which can be applied to either
environment through a (possibly automated) revision control system.

Data insertions are most conveniently done only on the production
database, as converging data between different databases can be tricky.
However, if your data is conveniently added in bulk without the risk of
duplication, a revision control approach could also work.

Finally, we make changes to PL functions and the like out of revision
control. In other words, "if it works in testing, we apply it to
production". Of the three, only this is idempotent.

In any event, perhaps the following could work?

1.  Backup the DB on the 1st server using pg_dump
2.  Restore the DB on the 2nd server
3.  Record changes on the 2nd server as revision controlled statements
4.  Replay changes on the 1st server using the revision controlled
statements
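
The four steps above might look like this as commands (hosts, database
names, and file paths are all placeholders):

```
# Illustrative only.
pg_dump -h server1 -Fc mydb > mydb.dump          # 1. dump on the 1st server
pg_restore -h server2 -C -d postgres mydb.dump   # 2. restore on the 2nd server
# 3. capture every change as a numbered, revision-controlled SQL file, e.g.
#    migrations/001_add_column.sql
# 4. replay the same files against the 1st server:
psql -h server1 -d mydb -f migrations/001_add_column.sql
```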

Regards
Rory




AW: ldap connection parameter lookup

2021-01-22 Thread Zwettler Markus (OIZ)
> -----Original Message-----
> From: Laurenz Albe 
> Sent: Friday, 15 January 2021 17:21
> To: Zwettler Markus (OIZ) ; pgsql-
> gene...@postgresql.org
> Subject: Re: ldap connection parameter lookup
> 
> On Fri, 2021-01-15 at 14:09 +, Zwettler Markus (OIZ) wrote:
> > I want to use ldap to lookup the connection parameters:
> > https://www.postgresql.org/docs/12/libpq-ldap.html
> >
> >
> > Do I have to create one static entry per database within pg_service.conf 
> > like:
> >
> > [mydatabase]
> >
> ldap://ldap.mycompany.com/dc=mycompany,dc=com?description?one?(cn=mydat
> abase)
> >
> >
> > or is there also some kind of generic variant like this (meaning lookup 
> > connection
> parameters for the database name I tell you somehow):
> >
> > [${PGDATABASE}]
> >
> ldap://ldap.mycompany.com/dc=mycompany,dc=com?description?one?(cn=${PGD
> ATABASE})
> 
> I proposed something like that a while ago:
> https://postgr.es/m/D960CB61B694CF459DCFB4B0128514C2F3442B%40exadv11
> .host.magwien.gv.at
> but it was rejected.
> 
> Perhaps you could come up with a better version.
> 
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com



I'm afraid not. My proposal would be essentially the same as yours.

Anyway. PostgreSQL needs some kind of generic central name resolution service.

It is not feasible to maintain static entries per database in a large
environment with hundreds of clients.

It's also not feasible to require ordinary end users to manage these
static entries themselves.

So, Tom, let's give it a try?

Regards,
Markus




Re: Copy & Re-copy of DB

2021-01-22 Thread Benedict Holland
No. Just no. It is fine to make stored procedure changes in a development
environment and deploy them as part of a release. Typically you would want
some sort of change tracking software like Alembic or Sqitch or something
like that. Production databases typically contain a huge amount of data, or
data that you really shouldn't share.

Part of the release is to deploy changes to production systems. Often you
will want to back up those systems before a release in case you have to
roll back or just make small changes that you can revert. I would say that
release procedures for database deployment is well beyond the scope of this
list and every company I have worked for has different procedures.

Basically, the OP wanted to replicate a process that sort of works in MSSQL,
kind of, maybe. The set of steps outlined will dump all of the tables and
restore them. If this is their process, I highly question that process, but
those steps are correct. I would point out that eventually that system will
break down: it is highly dependent on individuals knowing a lot of steps,
possibly exposes data to people who shouldn't have it, is overly
complicated, probably isn't best practice for releases, and is error-prone
(someone makes a change that no one else knows about and it breaks a
webpage).

So is this the best? In my opinion, probably not. Will it work in the way
that the OP wanted it to work? Yes. I simply wouldn't manage a process like
this but if that is the process that the OP is comfortable with and a lot
of people agreed to, it will work. Sometimes it is easier to simply
replicate the existing bad process that a team agrees to rather than making
a better process.

Thanks,
Ben

On Fri, Jan 22, 2021, 9:21 AM Rory Campbell-Lange 
wrote:

> On 22/01/21, Benedict Holland (benedict.m.holl...@gmail.com) wrote:
> > I mean... the best? You just laid out a bunch of steps to define a
> process.
> >
> > > On Thu, Jan 21, 2021 at 11:12 PM sivapostg...@yahoo.com <
> > > sivapostg...@yahoo.com> wrote:
> > >
> > >> So the solution to the issue will be
> > >>
> > >> 1.  Backup a DB using PGDUMP from 1st server.
> > >> 2.  Restore the DB in 2nd server.
> > >> 3.  Make required changes in the 2nd server.
> > >> 4.  Backup that DB using PGDUMP from 2nd server.
> > >> 5.  Delete / Rename that DB in the 1st server
> > >> 6.  Restore that DB in the 1st server.
> > >> 7.  Work again in the 1st server.
> > >>
> > >> Is this the best way to carry out this process?
>
> Rather late to the party, and I expect this has been mentioned already,
> but presumably changes to any database are either to do with the data or
> to do with aspects such as the pl functions.
>
> Data transformations can be tested in both production and testing
> environments inside transactions, so that the logic of the transformation
> can be captured in a set of SQL statements which can be applied to either
> environment through a (possibly automated) revision control system.
>
> Data insertions are most conveniently done only on the production
> database, as converging data between different databases can be tricky.
> However, if your data is conveniently added in bulk without the risk of
> duplication, a revision control approach could also work.
>
> Finally, we make changes to PL functions and the like out of revision
> control. In other words, "if it works in testing, we apply it to
> production". Of the three, only this is idempotent.
>
> In any event, perhaps the following could work?
>
> 1.  Backup the DB on the 1st server using pg_dump
> 2.  Restore the DB on the 2nd server
> 3.  Record changes on the 2nd server as revision controlled statements
> 4.  Replay changes on the 1st server using the revision controlled
> statements
>
> Regards
> Rory
>


open service broker api for local PGDG Postgres

2021-01-22 Thread Zwettler Markus (OIZ)
Does anyone know if there is an open service broker api for a local PGDG 
Postgres installation?

I can only find an osb api for Crunchy Postgres pgo: 
https://github.com/CrunchyData/pgo-osb

-Markus




Re: Copy & Re-copy of DB

2021-01-22 Thread Rory Campbell-Lange
On 22/01/21, Benedict Holland (benedict.m.holl...@gmail.com) wrote:
> Sometimes it is easier to simply replicate the existing bad process
> that a team agrees to rather than making a better process.

As Alvar Aalto said in a lecture at MIT:

It is not by temporary building that Parthenon comes on Acropolis.






Re: Customer unable to connect on port 5432, Postgres 10.7

2021-01-22 Thread Keith Christian
Failed to copy pgsql-general, post repeated below:

Adrian,

I used your suggestions, fixed the column order, and it worked the
first time for the customer this morning:

host    testpgdb    pguser    eee.fff.ggg.1/24    password

THANKS!




RES: Error while running restore

2021-01-22 Thread Márcio Antônio Sepp
Hi 

> > I’m getting this error message when try to restore a database :
> > ERROR:  function f_validanumero_cnpj_cpf(character) does not exist
> > LINE 1: SELECT (f_validanumero_cnpj_cpf(trim(cnpj_cpf)::char)=false)
> > ^
> > HINT:  No function matches the given name and argument types. You might
> need
> > to add explicit type casts.
> > QUERY:  SELECT (f_validanumero_cnpj_cpf(trim(cnpj_cpf)::char)=false)
> > CONTEXT:  função PL/pgSQL public.f_testacnpjcpf(character) linha 31 em IF
> 
> It looks like f_testacnpjcpf() is making unwarranted assumptions about what
> search_path it's invoked under.  You could try schema-qualifying the
> reference to f_validanumero_cnpj_cpf, or adding an explicit "SET
> search_path" clause to f_testacnpjcpf().

Thank you. Adding a SET search_path clause to f_testacnpjcpf() works fine.

Thank you Tom! Thanks list! 






RES: Error while running restore [SOLVED]

2021-01-22 Thread Márcio Antônio Sepp
> > Hi all,
> >
> > I’m getting this error message when try to restore a database :
> >
> > ERROR:  function f_validanumero_cnpj_cpf(character) does not exist
> >
> > LINE 1: SELECT (f_validanumero_cnpj_cpf(trim(cnpj_cpf)::char)=false)
> >
> >      ^
> >
> > HINT:  No function matches the given name and argument types. You might
> > need to add explicit type casts.
> >
> > QUERY:  SELECT (f_validanumero_cnpj_cpf(trim(cnpj_cpf)::char)=false)
> >
> > CONTEXT:  função PL/pgSQL public.f_testacnpjcpf(character) linha 31 em
> IF
> >
> > COPY esp_altera_estoque, line 1, column nr_cnpjcpf: "24614011000120"
> >
> > The funny is that i’m restoring the database to the same Server.
> >
> > Steps:
> >
> > pg_dump dbname > dumpfile
> >
> > Create new database and restore with:
> >
> > psql < dumpfile
> >
> > Which mistakes am I making?
> >
> > In this case, the database is small and if I open the dumpfile with a
> > notepad and split in 2 files and run after other it works fine.
> 
> Split where?
The database is very small. I open the dump file and, with Ctrl X and
Ctrl V, create 2 new files. Then I run each file in order.

> Is this function being used in a TRIGGER on esp_altera_estoque?
yes

Thank You Adrian and list!






localhost ssl

2021-01-22 Thread Rob Sargent



I will need to enforce ssl/tls in my production environment so I thought 
I would try setting things up on localhost to see how that went.


Then I noticed that my successful connections from 
"/usr/lib/postgresql/12/bin/psql -U postgres -h localhost -P pager=off 
postgres" report:


  psql (12.5 (Ubuntu 12.5-0ubuntu0.20.04.1))
  SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, 
bits: 256, compression: off)

  Type "help" for help.

though my pg_hba.conf does not specify SSL at all:

  # Database administrative login by Unix domain socket
  local   all   postgres                     peer

  # TYPE  DATABASE   USER   ADDRESS          METHOD

  # "local" is for Unix domain socket connections only
  local   all   all                          peer

  # IPv4 local connections:
  host    all   all    127.0.0.1/32          md5
  host    all   all    127.0.1.1/32          md5

  # IPv6 local connections:
  host    all   all    ::1/128               md5


So to the questions:
1. Am I already getting encrypted connections and if so, how?
2. In production I hope to name the role with each connection as I want 
the search_path set by the connecting role.  Will I need a cert per role 
with CN=?





Re: localhost ssl

2021-01-22 Thread Adrian Klaver

On 1/22/21 11:04 AM, Rob Sargent wrote:


I will need to enforce ssl/tls in my production environment so I thought 
I would try setting things up on localhost to see how that went.


Then I noticed that my successful connections from 
"/usr/lib/postgresql/12/bin/psql -U postgres -h localhost -P pager=off 
postgres" report:


   psql (12.5 (Ubuntu 12.5-0ubuntu0.20.04.1))
   SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, 
bits: 256, compression: off)

   Type "help" for help.

though my pg_hba.conf does not specify SSL at all


Yes it does(implied):

https://www.postgresql.org/docs/12/auth-pg-hba-conf.html

"host

This record matches connection attempts made using TCP/IP. host 
records match SSL or non-SSL connection attempts as well as GSSAPI 
encrypted or non-GSSAPI encrypted connection attempts."


Also I'm guessing you have ssl = on in postgresql.conf and server cert 
setup.


If you want to enforce SSL then:

"
hostssl

This record matches connection attempts made using TCP/IP, but only 
when the connection is made with SSL encryption.


To make use of this option the server must be built with SSL 
support. Furthermore, SSL must be enabled by setting the ssl 
configuration parameter (see Section 18.9 for more information). 
Otherwise, the hostssl record is ignored except for logging a warning 
that it cannot match any connections.

"

Read below for more information:

https://www.postgresql.org/docs/12/ssl-tcp.html
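Concretely, switching your TCP lines from host to hostssl would reject any non-SSL attempt; a sketch based on the IPv4/IPv6 lines quoted below:

```
# Only match connections made with SSL encryption:
hostssl  all  all  127.0.0.1/32  md5
hostssl  all  all  ::1/128       md5
```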




   # Database administrative login by Unix domain socket

   local   all postgres    peer


   # TYPE  DATABASE    USER    ADDRESS METHOD

   # "local" is for Unix domain socket connections only

   local   all all peer

   # IPv4 local connections:

   host    all all 127.0.0.1/32    md5

   host    all all 127.0.1.1/32    md5

   # IPv6 local connections:

   host    all all ::1/128 md5


So to the questions:
1. Am I already getting encrypted connections and if so, how?
2. In production I hope to name the role with each connection as I want 
the search_path set by the connecting role.  Will I need a cert per role 
with CN=?






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: localhost ssl

2021-01-22 Thread Rob Sargent




> Also I'm guessing you have ssl = on in postgresql.conf and server cert 
setup.


Sorry, here's a likely explanation from postgresql.conf:

ssl = on
#ssl_ca_file = ''

ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''

ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'

I have no recollection of making those choices (or what I had for 
breakfast).




If you want to enforce SSL then:

"
hostssl

     This record matches connection attempts made using TCP/IP, but only 
when the connection is made with SSL encryption.


Do you have any thoughts on question #2?




Re: FDW connections

2021-01-22 Thread Steve Baldwin
Thanks guys. I realise it was an odd request. The scenario is I'm building
a mechanism for an application to operate in limited capacity using a
secondary database while the primary database is being upgraded. I'm using
postgres_fdw to sync changes between the primary and secondary databases.
The reason for the question was during my testing I was switching between
'modes' (we refer to them as online and offline), and during the secondary
database setup process, it renames a database if it exists. That was
failing due to an existing connection that ended up being from the primary
database during its 'sync-from-offline' process from the previous test. The
primary database connection still existed because it was made from a
connection pool. So, the bottom line is that this was a somewhat contrived
situation, and I was able to release the connection from the pool after
performing the fdw query.

 We're using AWS RDS, so we've had to implement our own 'zero-downtime'
functionality. RDS also means we're a bit behind version-wise. The latest
version we have available today is 12.5, so I imagine it will be quite a
while before PG14 is a possibility.

Thanks very much for your help.

Kind regards,

Steve

On Fri, Jan 22, 2021 at 7:32 PM Hou, Zhijie 
wrote:

> > > If I have made a query on a foreign table (using postgres_fdw),  it
> > > establishes a connection automatically. Is there any way to
> > > disconnect that fdw connection without disconnecting the session  that
> > > instigated it?
> >
> > No.
> >
> > From PostgreSQL v14 on, there is the "idle_session_timeout" that you
> could
> > set on the server to close such sessions.  postgres_fdw will silently
> > re-establish such broken connections.  You could set this parameter in
> the
> > foreign server definition.
> >
> > But this is a strange request: why would you want to close such
> connections
> > before the database session ends?
> >
>
> Hi
>
> There are two new functions being reviewed called:
>
> postgres_fdw_disconnect()
> postgres_fdw_disconnect_all()
>
> These functions may solve your problem.
> If you are interested, you can take a look at [1].
>
> The functions have not been committed yet, but they may be available in PG 14.
>
> [1]
> https://www.postgresql.org/message-id/CALj2ACVcpU%3DwB7G%3DzT8msVHvPs0-y0BbviupiT%2Bf3--bGYaOMA%40mail.gmail.com
>
>
> Best regards,
> houzj
>


Re: localhost ssl

2021-01-22 Thread Adrian Klaver

On 1/22/21 11:49 AM, Rob Sargent wrote:



> Also I'm guessing you have ssl = on in postgresql.conf and server 
cert setup.


Sorry, here's a likely explanation from postgresql.conf

ssl = on
#ssl_ca_file = ''

ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''

ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'

I have no recollection of making those choices (or what I had for 
breakfast).




If you want to enforce SSL then:

"
hostssl

 This record matches connection attempts made using TCP/IP, but 
only when the connection is made with SSL encryption.


Do you have any thoughts on question #2?


No, as I really have no idea what:

"In production I hope to name the role with each connection as I want 
the search_path set by the connecting role. ..."


means?

I would point out this:

https://www.postgresql.org/docs/12/auth-cert.html

"User name mapping can be used to allow cn to be different from the 
database user name."


which leads to this:

https://www.postgresql.org/docs/12/auth-username-maps.html
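
A username map can let one client certificate serve several roles; a hedged sketch (the map name and certificate CN are made up):

```
# pg_hba.conf: require a client cert and consult the "appcerts" map
hostssl  all  all  0.0.0.0/0  cert  map=appcerts

# pg_ident.conf: one certificate CN ("app-client") may log in as several roles
# MAPNAME   SYSTEM-USERNAME   PG-USERNAME
appcerts    app-client        bob
appcerts    app-client        alice
```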



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: localhost ssl

2021-01-22 Thread Rob Sargent




No, as I really have no idea what:

"In production I hope to name the role with each connection as I want 
the search_path set by the connecting role. ..."


means?
My apologies: I rely on the search_path being set according to the role 
(--user).


Perhaps what I was missing is that the connection string uses both 
username and certificate?




I would point out this:

https://www.postgresql.org/docs/12/auth-cert.html

"User name mapping can be used to allow cn to be different from the 
database user name."



Just prior to that quote is
"The cn (Common Name) attribute of the certificate will be compared to 
the requested database user name, and if they match the login will be 
allowed."

which leads to me to believe I would need a cert per role.


which leads to this:

https://www.postgresql.org/docs/12/auth-username-maps.html
I don't think the mapping tricks help me, but happy to be convinced 
otherwise.


I have specific roles accessing specific schemas via sql which is not 
schema qualified.






Re: localhost ssl

2021-01-22 Thread Adrian Klaver

On 1/22/21 1:11 PM, Rob Sargent wrote:





Just prior to that quote is
"The cn (Common Name) attribute of the certificate will be compared to 
the requested database user name, and if they match the login will be 
allowed."

which leads to me to believe I would need a cert per role.


which leads to this:

https://www.postgresql.org/docs/12/auth-username-maps.html
I don't think the mapping tricks help me, but happy to be convinced 
otherwise.


Check out this section:

https://www.postgresql.org/docs/12/ssl-tcp.html#SSL-CLIENT-CERTIFICATES

"...  the cn (Common Name) in the certificate matches the user name or 
an applicable mapping."


This section spells out what is needed for the various forms of client 
cert SSL authentication.
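
As an illustration of the cert-per-role idea, a minimal openssl sketch with a toy private CA (all file and role names are made up):

```shell
# Toy private CA; point the server's ssl_ca_file at root.crt.
openssl req -new -x509 -nodes -days 3650 \
  -subj "/CN=my-postgres-ca" -keyout root.key -out root.crt

# Per-role client credentials: the certificate CN must equal the role name.
openssl req -new -nodes -subj "/CN=bob" -keyout bob.key -out bob.csr
openssl x509 -req -in bob.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out bob.crt -days 365

# Show the CN the server will compare against the login role:
openssl x509 -noout -subject -in bob.crt
```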




I have specific roles accessing specific schemas via sql which is not 
schema qualified.




I'm assuming this is some sort of security measure. Just wondering if there
is provision made for people who know how to do SET search_path or \dn or
schema-qualify objects?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: localhost ssl

2021-01-22 Thread Rob Sargent




Check out this section:

https://www.postgresql.org/docs/12/ssl-tcp.html#SSL-CLIENT-CERTIFICATES

"...  the cn (Common Name) in the certificate matches the user name or 
an applicable mapping."


This section spells out what is needed for the various forms of client 
cert SSL authentication.




I have specific roles accessing specific schemas via sql which is not 
schema qualified.




I'm  assuming this is some sort of security. Just wondering if there is 
provision made for people who know how to do SET search_path or \dn or 
schema qualify objects?



Honestly, I've been reading 18.9, but as you can see it uses the CN for the
host, and then 20.12 suggests using the CN for the role.


Yes, I'm confused.  As I said in reply to Jeff, I would rather not need 
to remember to set the search_path, which I can avoid if I login as "role".





Re: localhost ssl

2021-01-22 Thread Adrian Klaver

On 1/22/21 2:48 PM, Rob Sargent wrote:



Check out this section:

https://www.postgresql.org/docs/12/ssl-tcp.html#SSL-CLIENT-CERTIFICATES

"...  the cn (Common Name) in the certificate matches the user name or 
an applicable mapping."


This section spells out what is needed for the various forms of client 
cert SSL authentication.




I have specific roles accessing specific schemas via sql which is not 
schema qualified.




I'm  assuming this is some sort of security. Just wondering if there 
is provision made for people who know how to do SET search_path or \dn 
or schema qualify objects?



Honest, I've been reading 18.9 but as you can see it uses CN for host 
and then 20.12 suggests using CN for role.


That's the difference between the server certificate and a client
certificate.

Getting a handle on this is going to take an outline of what your
authentication needs are.





Yes, I'm confused.  As I said in reply to Jeff, I would rather not need 
to remember to set the search_path, which I can avoid if I login as "role".


I have not seen that conversation and I do not see it in the archive 
either. Is that off-list, different thread, something else?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: localhost ssl

2021-01-22 Thread Rob Sargent




On 1/22/21 3:54 PM, Adrian Klaver wrote:

On 1/22/21 2:48 PM, Rob Sargent wrote:


Honest, I've been reading 18.9 but as you can see it uses CN for host 
and then 20.12 suggests using CN for role.


Difference between server certificate and client certificate.

To get a handle on this is going to take an outline of what your 
authentication needs are?





Yes, I'm confused.  As I said in reply to Jeff, I would rather not 
need to remember to set the search_path, which I can avoid if I login 
as "role".


I have not seen that conversation and I do not see it in the archive 
either. Is that off-list, different thread, something else?




I missed reply-all on responding to Jeff.  Here is that for the list:

> This is what I use.  We have a user and schema per client in a
> multi-tenant database, as in user = bob, schema = bob, database =
> our_company.
>
> alter user bob set search_path to bob,our_company_common,public;
>
> Not sure your use case is the same though...

Yes, my case is very similar.  I just want to avoid needing to remember 
to set the search_path every time.  Since I alter the role to have a 
specific search_path as follows, all I need to do is login as "bob":


create schema if not exists sgstemplate\p\g
create role sgstemplate with login encrypted password ''\p\g
alter role sgstemplate set search_path=sgstemplate,base,public\p\g
grant connect on database PROJDB to sgstemplate\p\g

"sgstemplate" gets replaced by "sed s/sgstemplate/bob/g"

I only wish I could set a default database as well, but IMHO that's a 
failing of JDBC as much as postgres.
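
(For what it's worth, pgJDBC can carry the schema in the connection URL via its currentSchema parameter, which accepts a comma-separated search path; the host and database below are placeholders.)

```
jdbc:postgresql://db.example.com:5432/projdb?user=bob&currentSchema=bob,base,public
```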