Re: create policy statement USING clause

2024-11-13 Thread Mark Phillips
Thank you for the reply. I appreciate it very much.

I checked the data for nulls in the column values, but I didn't find any. I started
over from the beginning with a fresh clone of the database, and followed the
setup in an ordered fashion, including a little routine to ensure valid data in
the column, and it now works fine. The better form of the USING clause
certainly helped. I am happy to share my notes if someone would like to see
them.

As for pg 12, an update to the current stable release is on the project roadmap.

Cheers,

 - Mark

> On Nov 12, 2024, at 12:48 AM, Laurenz Albe  wrote:
> 
> On Mon, 2024-11-11 at 15:10 -0800, Mark Phillips wrote:
>> PostgreSQL 12
> 
> Upgrade now!
> 
>> Given a table “customer” with a column “deadfiled” of type boolean. The
>> column deadfiled is used to indicate that a row is “in the trash bin”. The
>> app has a window that lists the contents of the “trash bin”, which is any
>> rows with deadfiled = true. Rows so marked should be excluded from views
>> and queries in all other cases when the current user has the role
>> “app_user”.
>> 
>> I thought I could use row level security (RLS) to filter out all the 
>> deadfiled rows. 
>> 
>> ALTER TABLE customer ENABLE ROW LEVEL SECURITY;
>> CREATE POLICY filter_customer_deadfiled
>>  ON public.customer
>>  FOR SELECT
>>  TO app_staff
>>  USING ( NOT deadfiled );
>> 
>> However, that did not work as desired. I have read through a dozen articles 
>> and posts
>> online but haven’t figured out the USING clause. To my surprise, this worked:
>> CREATE POLICY customer_deadfiled
>> ON public.customer
>> AS PERMISSIVE
>> FOR SELECT
>> TO prm_staff
>> USING (coalesce(deadfiled,false)=false);
>> 
>> So my question is specifically about the USING clause, but also more broadly 
>> about
>> this attempted application of RLS.
> 
> It seems that your problem is that "deadfiled" is NULL in some rows, and you
> want such rows to be considered live.
> 
> Since NOT NULL is not TRUE, you'd have to use a USING clause like
> 
>  USING (deadfiled IS NOT TRUE)
> 
> Yours,
> Laurenz Albe
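
For reference, the difference between the two USING clauses comes down to
SQL's three-valued logic; a quick sketch, runnable anywhere:

-- A USING clause that returns NULL hides the row, same as false;
-- IS NOT TRUE and coalesce() both map NULL to "row stays visible".
SELECT NOT NULL::boolean                        AS not_null,     -- NULL
       NULL::boolean IS NOT TRUE                AS is_not_true,  -- true
       coalesce(NULL::boolean, false) = false   AS coalesced;    -- true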





Re: Fwd: A million users

2024-11-13 Thread Vijaykumar Jain
On Wed, Nov 13, 2024, 4:42 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:

>
> Exactly! In the later versions, security gets more and more refined and
> strengthened. So people should think about moving away from "public" and
> start implementing finer-grained security schemes, as you suggest. Plus,
> \dp looks prettier than having 1000+ users listed.
>

I wanted to just communicate the limits.
A lot of PostgreSQL architecture can leverage the resources and scale, but
not all.
I had 100s of 1000s of tables on the setup where I worked last.
If I did \dt it would freeze all the time. I had to exit the psql session,
check the source code for how the partition was named, and then look for what
I wanted.
Whether things look pretty in psql or not should not be a criterion for how
many objects you want to have.

I would expect clear exceptions so one knows what the real problem is.
The error I got did not in any way communicate the role limits behind the
column size limit.


Fwd: A million users

2024-11-13 Thread Vijaykumar Jain
-- Forwarded message -
From: Kaare Rasmussen <>


Hi

A simple question before design.

Would there be performance issues having perhaps a million users, each
having several roles? I could imagine a user would have on average 10-20
roles.

I tried to grant select permissions to 5000 different roles on one table.
It failed with "row size too big" already at 2443.

That said,
I am not even talking about catalog bloat.
So please test... I think there might be some doc that shows the
limitations, or it's in the source code; I can check later.
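
A sketch of the kind of test described above, assuming a scratch table "t"
(table and role names are illustrative):

CREATE TABLE t (id int);

DO $$
BEGIN
  FOR i IN 1..5000 LOOP
    EXECUTE format('CREATE ROLE r_%s', i);
    EXECUTE format('GRANT SELECT ON t TO r_%s', i);
  END LOOP;
END $$;
-- fails with "row size too big" partway through: the grants accumulate in
-- the table's ACL (pg_class.relacl), and pg_class has no TOAST table, so
-- the whole row must fit in a single page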


Re: Fwd: Fwd: Postgres attach partition: AccessExclusive lock set on different tables depending on how attaching is performed

2024-11-13 Thread Alvaro Herrera
On 2024-Nov-10, Tom Lane wrote:

> This surprised me a bit too, because I thought we took a
> slightly-less-than-exclusive lock for FK additions or deletions.
> Tracing through it, I find that CloneFkReferencing opens the
> referenced relation with ShareRowExclusiveLock as I expected.
> But then we conclude that we can drop the existing FK enforcement
> triggers for the table being attached.  That causes us to take
> AccessExclusiveLock on the trigger itself, which is fine because
> nobody's really paying attention to that.  But then RemoveTriggerById
> takes AccessExclusiveLock on the trigger's table.  We already had
> that on the table being attached, but not on the other table.

Oooh.

> I wonder whether it'd be all right for RemoveTriggerById to take
> only ShareRowExclusiveLock on the trigger's table.  This seems
> OK in terms of basic semantics: that's enough to lock out
> anything that might want to fire triggers on the table.  However,
> this comment for AlterTableGetLockLevel gives me pause:
> 
>  * Also note that pg_dump uses only an AccessShareLock, meaning that anything
>  * that takes a lock less than AccessExclusiveLock can change object 
> definitions
>  * while pg_dump is running. Be careful to check that the appropriate data is
>  * derived by pg_dump using an MVCC snapshot, rather than syscache lookups,
>  * otherwise we might end up with an inconsistent dump that can't restore.
> 
> I think pg_dump uses pg_get_triggerdef, which is probably not
> safe in these terms.

Looking at pg_get_triggerdef_worker, it is not using syscache but a
systable scan, which uses the catalog snapshot.  A catalog snapshot is
indeed implemented as an MVCC snapshot (so strictly speaking it _is_ an
MVCC snapshot), but the invalidation rules are different from a normal
MVCC snapshot, so AFAIU it's still unsafe.

> An alternative answer might be what Alvaro was muttering about
> the other day: redesign FKs for partitioned tables so that we
> do not have to change the set of triggers when attaching/detaching.

Hmm, I hadn't thought about this idea in those terms, but perhaps we
could reimplement this by not having one trigger for each RI check, but
instead a single trigger which internally determines which FK
constraints exist on the table and does the necessary work in a single
pass.  Then we don't need to add/drop triggers all the time, but we just
add it with the first FK in the table, and remove it when dropping the
last FK.

For tables with many FKs, this could be a win, because we'd only go
through the trigger machinery once.  If a table has both outgoing and
incoming FKs, maybe we could have _one_ single trigger.

(I think this would be orthogonal with the project to stop using SPI for
RI triggers.)
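
To observe the locking behavior discussed above from SQL, a rough sketch
(assuming a partitioned table with an FK to some referenced table; all names
are illustrative):

-- session 1: attach and hold the transaction open so the locks stay visible
BEGIN;
ALTER TABLE parted ATTACH PARTITION parted_p1 FOR VALUES FROM (1) TO (100);

-- session 2: check which relations got AccessExclusiveLock
SELECT relation::regclass AS rel, mode, granted
FROM pg_locks
WHERE mode = 'AccessExclusiveLock' AND relation IS NOT NULL;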

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/




Re: Fwd: A million users

2024-11-13 Thread Kaare Rasmussen
Hi Dominique,




Hi. Sure, that's a good point, and a best practice IMHO.
But I already do that, and the original question remains,
i.e. how does PostgreSQL perform with thousands or millions of roles?




Sorry if my original post was unclear, but I don't expect that there will be 
much more than perhaps a hundred roles. Each may have from a few up to a 
million users in them, though.



/kaare

Re: PostgreSQL logical replication

2024-11-13 Thread Jayadevan M
On Tue, Nov 12, 2024 at 7:47 PM Ron Johnson  wrote:

> The documentation tells you, down in the Notes section.
>
> https://www.postgresql.org/docs/16/sql-createpublication.html
>
Thank you.


DB wal file disabled --_Query

2024-11-13 Thread jayakumar s
Hi Team,

We have disabled the WAL file at the database level. But when I do a manual
switch, WAL files are generated in the WAL file location.

Here is my question: if we disable it, WAL files won't be generated, am I correct?

Kindly clarify my doubts.

postgres=# select name,setting from pg_settings where name like 'archive%';
      name       |  setting
-----------------+------------
 archive_command | (disabled)
 archive_mode    | off
 archive_timeout | 0
(3 rows)

postgres=#

postgres=# SELECT pg_switch_wal();
 pg_switch_wal
---------------
 D/9D00
(1 row)

[postgres@x pg_wal]$ ls -lrt
total 81920
drwx--. 2 postgres postgres6 Oct 13  2020 archive_status
-rw---. 1 postgres postgres 16777216 Nov 13 20:47
0001000D009F
-rw---. 1 postgres postgres 16777216 Nov 13 20:50
0001000D00A0
-rw---. 1 postgres postgres 16777216 Nov 13 20:51
0001000D00A1
-rw---. 1 postgres postgres 16777216 Nov 13 21:02
0001000D009D
-rw---. 1 postgres postgres 16777216 Nov 13 21:02
0001000D009E
[postgres@# pg_wal]$


Re: DB wal file disabled --_Query

2024-11-13 Thread Adrian Klaver

On 11/13/24 08:15, jayakumar s wrote:

Hi Team,

We have disabled the WAL file at the database level. But when I do a manual
switch, WAL files are generated in the WAL file location.

Here is my question: if we disable it, WAL files won't be generated, am I correct?

Kindly clarify my doubts.

postgres=# select name,setting from pg_settings where name like 'archive%';
       name       |  setting
------------------+------------
  archive_command | (disabled)
  archive_mode    | off
  archive_timeout | 0
(3 rows)


The above is disabling the archiving of WAL files to another location. 
WAL will still be written to pg_wal.


What are you trying to achieve?

You can specify that individual tables be UNLOGGED, as shown here:

https://www.postgresql.org/docs/current/sql-createtable.html

"UNLOGGED

If specified, the table is created as an unlogged table. Data 
written to unlogged tables is not written to the write-ahead log (see 
Chapter 28), which makes them considerably faster than ordinary tables. 
However, they are not crash-safe: an unlogged table is automatically 
truncated after a crash or unclean shutdown. The contents of an unlogged 
table are also not replicated to standby servers. Any indexes created on 
an unlogged table are automatically unlogged as well.


If this is specified, any sequences created together with the 
unlogged table (for identity or serial columns) are also created as 
unlogged.

"






postgres=#

postgres=# SELECT pg_switch_wal();
  pg_switch_wal
---------------
  D/9D00
(1 row)

[postgres@x pg_wal]$ ls -lrt
total 81920
drwx--. 2 postgres postgres        6 Oct 13  2020 archive_status
-rw---. 1 postgres postgres 16777216 Nov 13 20:47 
0001000D009F
-rw---. 1 postgres postgres 16777216 Nov 13 20:50 
0001000D00A0
-rw---. 1 postgres postgres 16777216 Nov 13 20:51 
0001000D00A1
-rw---. 1 postgres postgres 16777216 Nov 13 21:02 
0001000D009D
-rw---. 1 postgres postgres 16777216 Nov 13 21:02 
0001000D009E

[postgres@# pg_wal]$


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: DB wal file disabled --_Query

2024-11-13 Thread David G. Johnston
On Wed, Nov 13, 2024 at 9:16 AM jayakumar s 
wrote:

> postgres=# select name,setting from pg_settings where name like 'archive%';
>   name   |  setting
> -+
>  archive_command | (disabled)
>

You cannot disable WAL.  What is disabled here is copying the WAL that is
produced to an archive.  IOW, backing up WAL.

David J.
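
For contrast, turning archiving on looks roughly like this in
postgresql.conf (the cp command is the documentation's example, not a
production recommendation; changing archive_mode requires a restart):

archive_mode = on
archive_command = 'cp %p /mnt/server/archivedir/%f'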


Re: Fwd: A million users

2024-11-13 Thread Dominique Devienne
On Wed, Nov 13, 2024 at 11:29 AM Alvaro Herrera  wrote:
> On 2024-Nov-13, Vijaykumar Jain wrote:
> > I tried to grant select permissions to 5000 different roles on one table.
> > It failed with "row size too big" already at 2443.
>
> But you can grant select to one "reader" role, and grant that one role
> to however many other roles you want.  This way you can have an
> arbitrary number of roles with indirect access to the table.  In
> real-world usage, this is more convenient than granting access to
> individual roles on individual tables; likely, you'll grant access to
> sets of tables/views/functions/etc rather than exactly one, and you can
> manage that more easily if you have one intermediate role to modify than
> if you have to mess with 5000 individual roles.

Hi. Sure, that's a good point, and a best practice IMHO.
But I already do that, and the original question remains,
i.e. how does PostgreSQL perform with thousands or millions of roles?
In my use case, 1000 LOGIN users and 10'000 schema-related ROLEs
are possible, which can translate to millions of GRANTs.

It matters to me especially, since I'm using pg_has_role() in RLS predicates,
in addition to using ROLEs for access to schemas as usual (see above).
I'm not at a point where I can test that myself right now, but if anyone has looked into
how PostgreSQL scales with many roles (O(N)? O(logN)? O(NlogN)?),
I'd be interested in pointers to that research or those benchmarks.

Thanks, --DD
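
A hypothetical shape of such a predicate ("project_data" and the name-typed
column "owner_role" are illustrative, not from the thread):

CREATE POLICY member_read ON project_data
  FOR SELECT
  USING (pg_has_role(current_user, owner_role, 'MEMBER'));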




Re: Fwd: A million users

2024-11-13 Thread Alvaro Herrera
On 2024-Nov-13, Vijaykumar Jain wrote:

> I tried to grant select permissions to 5000 different roles on one table.
> It failed with "row size too big" already at 2443.

But you can grant select to one "reader" role, and grant that one role
to however many other roles you want.  This way you can have an
arbitrary number of roles with indirect access to the table.  In
real-world usage, this is more convenient than granting access to
individual roles on individual tables; likely, you'll grant access to
sets of tables/views/functions/etc rather than exactly one, and you can
manage that more easily if you have one intermediate role to modify than
if you have to mess with 5000 individual roles.

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/
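
A sketch of the pattern, with illustrative names:

CREATE ROLE reader NOLOGIN;                          -- intermediate role
GRANT SELECT ON ALL TABLES IN SCHEMA app TO reader;  -- or individual tables
GRANT reader TO alice, bob, carol;                   -- scales to many members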




Re: Fwd: A million users

2024-11-13 Thread Achilleas Mantzios - cloud



On 11/13/24 12:29, Alvaro Herrera wrote:

On 2024-Nov-13, Vijaykumar Jain wrote:


I tried to grant select permissions to 5000 different roles on one table.
It failed with "row size too big" already at 2443.

But you can grant select to one "reader" role, and grant that one role
to however many other roles you want.  This way you can have an
arbitrary number of roles with indirect access to the table.  In
real-world usage, this is more convenient than granting access to
individual roles on individual tables; likely, you'll grant access to
sets of tables/views/functions/etc rather than exactly one, and you can
manage that more easily if you have one intermediate role to modify than
if you have to mess with 5000 individual roles.


Exactly! In the later versions, security gets more and more refined and
strengthened. So people should think about moving away from "public" and
start implementing finer-grained security schemes, as you suggest. Plus,
\dp looks prettier than having 1000+ users listed.









Re: Fwd: A million users

2024-11-13 Thread Vijaykumar Jain
On Wed, Nov 13, 2024, 5:00 PM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

>
>
> On Wed, Nov 13, 2024, 4:42 PM Achilleas Mantzios - cloud <
> a.mantz...@cloud.gatewaynet.com> wrote:
>
>>
>> Exactly! In the later versions, security gets more and more refined and
>> strengthened. So people should think about moving away from "public" and
>> start implementing finer-grained security schemes, as you suggest. Plus,
>> \dp looks prettier than having 1000+ users listed.
>>
>
> I wanted to just communicate the limits.
> A lot of PostgreSQL architecture can leverage the resources and scale, but
> not all.
> I had 100s of 1000s of tables on the setup where I worked last.
> If I did \dt it would freeze all the time. I had to exit the psql session,
> check the source code for how the partition was named, and then look for
> what I wanted.
> Whether things look pretty in psql or not should not be a criterion for
> how many objects you want to have.
>
> I would expect clear exceptions so one knows what the real problem is.
> The error I got did not in any way communicate the role limits behind the
> column size limit.
>

https://fluca1978.github.io/2018/01/04/PostgreSQLUsers.html

So roles are not the problem.
But if you grant them SELECT individually on the same table, for example,
then the limits are breached based on the size of the column, not the
number of permissions.



Re: Fwd: A million users

2024-11-13 Thread walther

Dominique Devienne:

Hi. Sure, that's a good point, and a best practice IMHO.
But I already do that, and the original question remains,
i.e. how does PostgreSQL perform with thousands or millions of roles?
In my use case, 1000 LOGIN users and 10'000 schema-related ROLEs
are possible, which can translate to millions of GRANTs.

It matters to me especially, since I'm using pg_has_role() in RLS predicates,
in addition to using ROLEs for access to schemas as usual (see above).
I'm not at a point where I can test that myself right now, but if anyone has looked into
how PostgreSQL scales with many roles (O(N)? O(logN)? O(NlogN)?),
I'd be interested in pointers to that research or those benchmarks.


I don't have any benchmarks, but I can offer the following data point:

We use PostgREST [1] which connects to the database with one 
"authenticator" role. For each request it handles, it does a SET ROLE to 
a role defined in the Authorization Header (JWT).
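
Roughly this shape, with illustrative names (PostgREST documents this
authenticator/impersonation pattern):

CREATE ROLE authenticator LOGIN NOINHERIT;
CREATE ROLE web_user NOLOGIN;
GRANT web_user TO authenticator;

-- executed per request, after the JWT is validated:
SET ROLE web_user;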


Some numbers we are running with currently:
- 1 authenticator role
- ~ 127.000 user roles granted to "authenticator"
- ~ 14.000 "scope" roles granted to the user roles (tenants, groups, ..)
- ~ 15 "context" roles granted to user roles ("admin", "user", ...)
- ~ 50 "access" roles granted to context roles ("view_x", "do_y", ...)

Only the access roles have any direct privileges granted.

We currently have ~ 700 RLS policies defined. Those are created TO the 
context roles. The policies check the current role's scope roles to 
select "allowed" rows.


In total, we have ~370.000 roles granted to each other (pg_auth_members).

Except for one thing, we have never had any real problems with this. We 
didn't observe anything getting massively worse with many roles, even 
though we use them extensively. RLS policies need to be carefully 
written to get any performance, though.


The one problem we found is:

The first time the authenticator role does a SET ROLE in a session, it's
**terribly** slow. Even with fewer users back then, it took 6-7 minutes. Any
SET ROLE afterwards in the same session would be fast. Even more annoying,
killing the session while SET ROLE was running would not work properly and
would leave zombie processes. Giving the authenticator role the SUPERUSER
privilege avoids the problem and makes it instant. However, that's not very
desirable.


There were some improvements in that area, IIRC in the 17 cycle (?),
but I haven't had the time to test with those. We are still on v15,
and the last time I tested this was ~ two years ago. I still wasn't able
to put together a simple reproducer either.


You should *probably* be better off with your different LOGIN roles; I
assume the role cache builds up much quicker in that case.


Hope that helps.

Best,

Wolfgang

[1]: https://postgrest.org




Extract values from XML content

2024-11-13 Thread celati Laurent
Good afternoon,
I have a table 'metadata' with 2000 records, with one column 'id' and one
column 'data' with XML content.
I need to extract, for all records, the values of the organisation names.
I succeeded in querying without an error message thanks to the following
SQL query:

SELECT id,
       xpath('/contact/CI_ResponsibleParty/organisationName/CharacterString/text()',
             CAST(data AS XML)) AS organisation_name
FROM public.metadata;

But the values don't appear in the column:

"id""organisation_name"
16410"[]"
16411"[]"
16412"[]"
16413"[]"
16414"[]"
16415"[]"
16416"[]"
16423"[]"
16425"[]"
16426"[]"
16427"[]"
16435"[]"
2250 "[]"
16587"[]"
16588"[]"

If needed, I paste below the full extract of the XML content up to my
section of interest:

<cit:CI_Responsibility>
  <cit:role>
    <cit:CI_RoleCode
        codeList="http://standards.iso.org/iso/19115/resources/Codelists/cat/codelists.xml#CI_RoleCode"
        codeListValue="originator" />
  </cit:role>
  <cit:party>
    <cit:CI_Organisation>
      <cit:name>
        <gco:CharacterString>Office français de la
biodiversité</gco:CharacterString>
      </cit:name>
    </cit:CI_Organisation>
  </cit:party>
</cit:CI_Responsibility>

Thanks so much.


work_mem RAM Accounting in PostgreSQL

2024-11-13 Thread Alexandru Lazarev
Hello Everyone,

I have some questions regarding how the "work_mem" parameter affects the
overall RAM usage of PostgreSQL processes within a physical host or
container.

Each backend process during SQL execution may allocate N * "work_mem"
simultaneously. For example, if "work_mem" is set to 32MB and N=5 (i.e. 5
simultaneous and/or sequential ORDER and hash operations), and the initial
RAM usage (RSS - Resident Set Size) of the backend is 10MB, I would expect
the backend process to use 160MB (32MB * 5) + 10MB, resulting in a total
RAM usage of 170MB.

My questions are as follows:

1. What happens to the allocated "work_mem" after the execution of query
nodes? Are these memory allocations freed?
2. If they are freed, do they remain in the RSS of the PostgreSQL backend?
3. From various sources, I understand that these allocations are freed
after each node execution due to memory contexts, but they might remain in
some sort of backend memory pool for future reuse. Is this correct?
4. If so, will this memory be accounted for as used RAM on my
Linux/Container system after the backend returns to an idle state (e.g.,
connection pooling)?

Additionally: If the above is true, and my PostgreSQL host or container is
limited to 16GB of RAM, what would happen if I have 100 pooled connections,
each gradually allocating those 160MB? Will this memory be reclaimed (if I
understood it correctly as a kind of inactive anon mem), or will the OOM
Killer be triggered at some point (because it is real allocated memory)?

Thank you for your insights.

Best regards,
AlexL
Java Dev
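
One way to inspect what a backend actually retains, for anyone investigating
this (PostgreSQL 14+; the view shows the current backend's own contexts):

SELECT name, total_bytes, used_bytes, free_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;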


Help with restoring database from old version of PostgreSQL

2024-11-13 Thread Catherine Frock
I'm seeking help in restoring a database that was created with PostgreSQL
9.6. I backed it up successfully and have restored it before, but that was
when PostgreSQL 9.6 was still supported. After going through the
installation of PostgreSQL 16.3, I opened up pgAdmin 4 to try to verify the
PostgreSQL/PostGIS installation, but when I try to connect to the
PostgreSQL 16.3 server, I get an internal server error message:
'ServerManager' object has no attribute 'user_info'.

In pgAdmin4, I am able to connect to the PostgreSQL 9.6 server, but when I
try to verify the install by querying: CREATE EXTENSION postgis;
SELECT postgis_version();
I get the following error message:  ERROR: could not open extension control
file "C:/Program Files/PostgreSQL/9.6/share/extension/postgis.control": No
such file or directory SQL state: 58P01

How can I restore my database to have access to it again?

Thanks.


Re: Extract values from XML content

2024-11-13 Thread David G. Johnston
On Wednesday, November 13, 2024, celati Laurent 
wrote:
>
>
> SELECT id,
>        xpath('/contact/CI_ResponsibleParty/organisationName/CharacterString/text()',
>              CAST(data AS XML)) AS organisation_name
>
> <cit:CI_Responsibility>
>   <cit:role>
>     <cit:CI_RoleCode
>         codeList="http://standards.iso.org/iso/19115/resources/Codelists/cat/codelists.xml#CI_RoleCode"
>         codeListValue="originator" />
>   </cit:role>
>   <cit:party>
>     <cit:CI_Organisation>
>       <cit:name>
>         <gco:CharacterString>Office français de la
> biodiversité</gco:CharacterString>
>       </cit:name>
>     </cit:CI_Organisation>
>   </cit:party>
> </cit:CI_Responsibility>
>
>
I’d look into how to specify namespaces since your XML elements all have
them but you don’t have any in your xpath.

David J.


Re: Fwd: A million users

2024-11-13 Thread Alvaro Herrera
On 2024-Nov-13, Kaare Rasmussen wrote:

> Sorry if my original post was unclear, but I don't expect that there
> will be much more than perhaps a hundred roles. Each may have from a
> few up to a million users in them, though.

In Postgres, a user is a role.  So if you have a hundred roles and a
million users that these roles are granted to, that means you'll have
1_000_100 roles.  (In the worst case, where you grant all one hundred roles
to each of the million users, you would end up with 100_000_000 rows in
pg_auth_members).

I would expect such a system to work mostly fine.  It'll need memory for
the caches used to store contents of system catalogs.  I think you
should test it out and see what happens.  I haven't seen any published
_actual_ benchmarks on this point.

That said, having a million users is a bit strange.  Do you want to give
each visitor to your website a unique Postgres role, or something like
that?  I think this is unusual, but it should work.

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/




Re: Extract values from XML content

2024-11-13 Thread shammat

On 13.11.24 at 15:58, celati Laurent wrote:

I have a table 'metadata' with 2000 records, with one column 'id' and one
column 'data' with XML content.
I need to extract, for all records, the values of the organisation names.
I succeeded in querying without an error message thanks to the following SQL query:

SELECT id,
       xpath('/contact/CI_ResponsibleParty/organisationName/CharacterString/text()',
             CAST(data AS XML)) AS organisation_name
FROM public.metadata;



I typically find xmltable() a lot easier to work with, especially if the XML
contains namespaces.

I guess the namespaces are the problem in your case; you will have to pass
them to the xpath() function and reference them in the path expression as
well.

So you will need something like:

xpath('//cit:CI_Responsibility/cit:party/cit:CI_Organisation/cit:name/gco:CharacterString/text()',
      cast(data as xml),
      ARRAY[ARRAY['cit', 'http://...'], ARRAY['gco', 'http://...']])

The actual value for the namespace URIs depends on the definition in your XML

Note that xpath() returns an array, so you probably want (xpath())[1]
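
An xmltable() sketch of the same extraction (the http://... placeholders
stand for the namespace URIs declared in your XML):

SELECT m.id, x.organisation_name
FROM public.metadata m
CROSS JOIN XMLTABLE(
    XMLNAMESPACES('http://...' AS cit, 'http://...' AS gco),
    '//cit:CI_Organisation' PASSING m.data::xml
    COLUMNS organisation_name text PATH 'cit:name/gco:CharacterString'
) AS x;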







Re: Help with restoring database from old version of PostgreSQL

2024-11-13 Thread Adrian Klaver

On 11/13/24 10:50, Catherine Frock wrote:
I'm seeking help in restoring a database that was created with 
PostgreSQL 9.6. I backed it up successfully and have restored it before, 
but that was when PostgreSQL 9.6 was still supported. After going 


Do you still have the Postgres 9.6 instance running?

through the installation of PostgreSQL 16.3, I opened up pgAdmin 4 to 


What version of pgAdmin4?

try to verify the PostgreSQL/PostGIS installation, but when I try to 
connect to the PostgreSQL 16.3 server, I get an internal server error 
message: 'ServerManager' object has no attribute 'user_info'.


In pgAdmin4, I am able to connect to the PostgreSQL 9.6 server, but when 


So the 9.6 instance is running.


I try to verify the install by querying: CREATE EXTENSION postgis;
SELECT postgis_version();
I get the following error message:  ERROR: could not open extension 
control file "C:/Program 
Files/PostgreSQL/9.6/share/extension/postgis.control": No such file or 
directory SQL state: 58P01


What OS and version are you running?

How did you install Postgres?




How can I restore my database to have access to it again?

Thanks.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Help with restoring database from old version of PostgreSQL

2024-11-13 Thread Ron Johnson
On Wed, Nov 13, 2024 at 1:50 PM Catherine Frock  wrote:

> I'm seeking help in restoring a database that was created with PostgreSQL
> 9.6. I backed it up successfully and have restored it before, but that was
> when PostgreSQL 9.6 was still supported. After going through the
> installation of PostgreSQL 16.3, I opened up pgAdmin 4 to try to verify the
> PostgreSQL/PostGIS installation, but when I try to connect to the
> PostgreSQL 16.3 server, I get an internal server error message:   
> 'ServerManager'
> object has no attribute 'user_info'.
>
> In pgAdmin4, I am able to connect to the PostgreSQL 9.6 server, but when I
> try to verify the install by querying: CREATE EXTENSION postgis;
> SELECT postgis_version();
> I get the following error message:  ERROR: could not open extension
> control file "C:/Program 
> Files/PostgreSQL/9.6/share/extension/postgis.control":
> No such file or directory SQL state: 58P01
>
> How can I restore my database to have access to it again?
>

Do you still have the backup file somewhere?  If so, then modern versions
of PG *will* read the 9.6 backup file.  You "just" need to have PostGIS
somewhere on disk that PG 16.3 knows about.

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!
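
If the backup is a custom-format dump, the path described would look roughly
like this (names are illustrative; use psql -f for a plain SQL dump):

# create a fresh database in the 16.3 cluster, then restore the 9.6 dump;
# the dump's CREATE EXTENSION postgis succeeds once PostGIS for PG 16 is
# installed so the control file exists
createdb mydb
pg_restore -d mydb old_96_backup.dump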


Row level security policy

2024-11-13 Thread Mark Phillips
Given a database table with one policy statement FOR SELECT applied, is it
necessary to apply additional policy statements for insert, update, and delete
operations?

My testing indicates that this is the case, but I haven’t found an explanation of
this requirement in the documentation.

 - Mark




Re: Row level security policy

2024-11-13 Thread David G. Johnston
On Wednesday, November 13, 2024, Mark Phillips 
wrote:

> Given a database table with one policy statement FOR SELECT applied, is it
> necessary to apply additional policy statements for insert, update, and
> delete operations?
>

It isn’t necessary but most conventional use cases would involve
establishing policies for writing as well as reading.  But it is use case
dependent.

David J.


Re: Row level security policy

2024-11-13 Thread Mark Phillips
Thank you. I will revisit my test cases to be sure I have the use cases covered.

 - Mark, out and about.

> On Nov 13, 2024, at 5:36 PM, David G. Johnston wrote:
>
> On Wednesday, November 13, 2024, Mark Phillips wrote:
>> Given a database table with one policy statement FOR SELECT applied, is it
>> necessary to apply additional policy statements for insert, update, and
>> delete operations?
>
> It isn’t necessary but most conventional use cases would involve
> establishing policies for writing as well as reading.  But it is use case
> dependent.
>
> David J.


Re: Row level security policy

2024-11-13 Thread Mark Phillips
Well, things did not work as I expected, which means there is more for me to 
learn. I am new to RLS usage. I want to implement this in a proper manner, so 
is the behavior described below correct? Are there other aspects of this I need 
to study?

Thanks, in advance. Advice and links to articles are greatly appreciated.

 - Mark

Here are two tests I ran using pg 12 (upgrade on the schedule). 

Given a table “customer” with a column “deadfiled” with a default of false. If
deadfiled is true, then exclude the row from queries executed by the role “staff”.

Test 1
CREATE POLICY filter_customer_deadfiled
ON public.customer
AS PERMISSIVE
FOR SELECT
TO staff
USING ((deadfiled IS NOT TRUE));

Select queries by staff do not include rows where deadfiled is true. Update and
insert queries by staff on visible rows fail.

Test 2
CREATE POLICY filter_customer_deadfiled
ON public.customer
AS PERMISSIVE
FOR ALL
TO prm_staff
USING ((deadfiled IS NOT TRUE));

Select queries by staff do not include rows where deadfiled is true. Update and
insert queries by staff on visible rows succeed.

This indicates that a policy using FOR ALL allows CRUD, but if the policy states
FOR SELECT then additional policies are needed for insert, update, and delete.
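
A sketch of the separate write policies implied by Test 1, reusing its
predicate (table and role names follow the tests above):

CREATE POLICY customer_insert ON public.customer
  FOR INSERT TO staff
  WITH CHECK (deadfiled IS NOT TRUE);

CREATE POLICY customer_update ON public.customer
  FOR UPDATE TO staff
  USING (deadfiled IS NOT TRUE)
  WITH CHECK (deadfiled IS NOT TRUE);

CREATE POLICY customer_delete ON public.customer
  FOR DELETE TO staff
  USING (deadfiled IS NOT TRUE);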



> On Nov 13, 2024, at 6:13 PM, Mark Phillips  wrote:
> 
> Thank you. I will revisit my test cases to be sure I have the use cases 
> covered.
>  - Mark, out and about.
> 
>> On Nov 13, 2024, at 5:36 PM, David G. Johnston  
>> wrote:
>> 
>> On Wednesday, November 13, 2024, Mark Phillips wrote:
>>> Given a database table with one policy statement FOR SELECT applied, is it
>>> necessary to apply additional policy statements for insert, update, and
>>> delete operations?
>> 
>> It isn’t necessary but most conventional use cases would involve 
>> establishing policies for writing as well as reading.  But it is use case 
>> dependent.
>> 
>> David J.
>>