Re: pgbackrest Help Required

2022-09-27 Thread Stephen Frost
Greetings,

* Inzamam Shafiq (inzamam.sha...@hotmail.com) wrote:
> I am in process of configuring pgbackrest, I have followed documents and 
> configured backup from backup server. I have a setup of Primary and Hot 
> Standby, when I configured pgbackrest the standby DB got un-synced and now I 
> am unable to recover it, can anyone help or share a good document where we 
> can implement pgbackrest where streaming replication is already in place 
> using traditional pg_basebackup method?
> 
> Note: I want to configure pgbackrest and to take backup from standby DB, 
> however streaming replication is already there, where the archive and restore 
> commands are configured using traditional zip method.

Not sure what 'traditional zip method' means here, but if you've copied
the shell script out of the documentation, that's not a safe
configuration.

To configure pgbackrest, simply configure each of the PG hosts
and the repo and then set backup-standby=y.  You'll need to make sure
that archive_command is set to pgbackrest, of course.  Having an
existing streaming replication setup is perfectly fine and pgbackrest
won't interfere with it.  You will want to set your restore_command
to pgbackrest as well, in case streaming fails.
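For concreteness, a minimal sketch of such a setup on the repository host might look like this; the stanza name ("demo"), host names, and data paths are placeholders, not taken from this thread:

```ini
# /etc/pgbackrest/pgbackrest.conf on the repository (backup) host
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2

[demo]
pg1-host=primary.example.com
pg1-path=/var/lib/postgresql/data
pg2-host=standby.example.com
pg2-path=/var/lib/postgresql/data
# take the file copy from the standby while WAL archiving happens on the primary
backup-standby=y
```

After creating the stanza, `pgbackrest --stanza=demo check` can be used to verify that archiving and the repo are wired up correctly.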

Thanks,

Stephen




Re: Findout long unused tables in database

2022-09-27 Thread Laurenz Albe
On Tue, 2022-09-27 at 08:35 +0200, Andreas Kretschmer wrote:
> 
> 
> Am 26.09.22 um 14:05 schrieb Andreas Fröde:
> > Hello,
> > I am looking for a way to find out when a table was last used for 
> > reading. (Without writing every statement in the logfile or putting a 
> > trigger on it). Is there such a thing?
> > 
> 
> Not really what you are looking for, I know, but we have 
> pg_stat_user_tables. There you can find how often the table was queried 
> in the past. Take the data, wait some time, take it again and compare.

Ah, that is the best solution.  I should have thought of that.

Yours,
Laurenz Albe
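A sketch of that snapshot-and-compare approach; the snapshot table name is made up, and note that these counters go back to zero if statistics are reset or the server crashes:

```sql
-- Snapshot the per-table read counters:
create table usage_snapshot as
select now() as taken_at, relname, seq_scan, idx_scan
from pg_stat_user_tables;

-- ...wait days or weeks, then list tables whose counters haven't moved.
-- IS NOT DISTINCT FROM handles idx_scan being NULL for tables without indexes.
select t.relname
from pg_stat_user_tables t
join usage_snapshot s using (relname)
where t.seq_scan is not distinct from s.seq_scan
  and t.idx_scan is not distinct from s.idx_scan;
```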




Re: Findout long unused tables in database

2022-09-27 Thread Andreas Fröde

Hi Andreas,


Not really what you are looking for, I know, but we have 
pg_stat_user_tables. There you can find how often the table was queried 
in the past. Take the data, wait some time, take it again and compare.


Thanks for this idea. I will try it out.

Andreas




Re: Order by in a sub query when aggregating the main query

2022-09-27 Thread Federico
I've changed the code to use order by in the aggregate and it seems
there are no noticeable changes in the query performance.
Thanks for the help.

Best,
Federico Caselli

On Sun, 25 Sept 2022 at 00:30, Federico  wrote:
>
> Understood, thanks for the explanation.
> I'll work on updating the queries used by sqlalchemy to do array_agg(x
> order by x) instead of the order by in the subquery.
>
> > I think that right now that'd
> > incur additional sorting overhead, which is annoying.  But work is
> > ongoing to recognize when the input is already correctly sorted
> > for an aggregate, so it should get better in PG 16 or so.
>
> Nice to know; hopefully it's not too bad for this use case
>
> Thanks, Federico Caselli
>
> On Sun, 25 Sept 2022 at 00:20, Tom Lane  wrote:
> >
> > Federico  writes:
> > > A basic example of the type of query in question is the following (see
> > > below for the actual query):
> >
> > >   select w, array_agg(x)
> > >   from (
> > > select v, v / 10 as w
> > > from pg_catalog.generate_series(25, 0, -1) as t(v)
> > > order by v
> > >   ) as t(x)
> > >   group by w
> >
> > > This query will return an ordered array as specified by the order by
> > > clause in the subquery.
> > > Can this behaviour be relied upon?
> >
> > No, not really.  It might always work given a particular set of
> > circumstances.  As long as the planner chooses to do the outer
> > query's grouped aggregation as a HashAgg, there'd be no reason
> > for it to reshuffle the subquery output before feeding that to
> > array_agg.  However, if it decided that sort-group-and-aggregate
> > was better, it'd insert a sort by w above the subquery, and then
> > you'd lose any certainty of the ordering by v continuing to hold.
> > (Maybe the sort by w would be stable for equal keys, but that's
> > not guaranteed.)
> >
> > What you really ought to do is write
> >
> >   select w, array_agg(x order by x)
> >   from ...
> >
> > to be in the clear per SQL standard.  I think that right now that'd
> > incur additional sorting overhead, which is annoying.  But work is
> > ongoing to recognize when the input is already correctly sorted
> > for an aggregate, so it should get better in PG 16 or so.
> >
> > regards, tom lane
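Applying that suggestion to the query at the top of the thread gives the following; the subquery's ORDER BY is dropped and the ordering is requested in the aggregate instead:

```sql
select w, array_agg(x order by x)
from (
  select v, v / 10 as w
  from pg_catalog.generate_series(25, 0, -1) as t(v)
) as t(x)
group by w;
```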




Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Peter J. Holzer
On 2022-09-27 14:58:58 +0800, Julien Rouhaud wrote:
> On Mon, Sep 26, 2022 at 11:18:34AM -0700, Bryn Llewellyn wrote:
> > My demo seems to show that when a program connects as "client", it can
> > perform exactly and only the database operations that the database design
> > specified.
> >
> > Am I missing something? In other words, can anybody show me a vulnerability?
> 
> What exactly prevents the client role from inserting e.g.
> 
> - 'robert''); drop table students; --'

It can do this but it won't do any harm since the client role doesn't
have permission to drop the table.

> - millions of 'cat' rows
> - millions of 1GB-large rows

That depends on "the database operations that the database design
specified", but if the client role is supposed to be able to insert
data, you can't really prevent it from inserting non-sensical or
enormous data. You can encapsulate the insert functionality in a
function or procedure and do some sanity checks there. But automatically
distinguishing between legitimate use and abuse is generally not simple.
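As an illustration of that encapsulation idea, a sketch might look like the following; the table, role name, and size limit are hypothetical, not from this thread:

```sql
-- Hypothetical procedure wrapping inserts with a sanity check.
create procedure insert_item(p_value text)
language plpgsql
security definer
as $$
begin
  -- reject obviously enormous input
  if octet_length(p_value) > 10000 then
    raise exception 'value too large';
  end if;
  insert into items(value) values (p_value);
end;
$$;

-- The client role may call the procedure but not touch the table directly.
revoke all on table items from client;
grant execute on procedure insert_item(text) to client;
```

In practice a SECURITY DEFINER routine should also pin a safe search_path, and, as noted above, such checks catch only what the designer anticipated.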

> or just keep sending massive invalid query texts to fill the logs, or just
> trying to connect until there's no available connection slots anymore, and 
> then
> keep spamming the server thousands of time per second to try to open new
> connections, or ...?

There are often several layers of defense. The database frequently won't
be accessible from the open internet (or even the company network)
directly. Only a middle tier of application servers running vetted
client code will connect directly. Even those servers may not be
accessible directly to end users. There may be a layer of proxy servers
above them. Each of these layers may implement additional checks, rate
limits and monitoring.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Bryn Llewellyn
> rjuju...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> My demo seems to show that when a program connects as "client", it can 
>> perform exactly and only the database operations that the database design 
>> specified. Am I missing something? In other words, can anybody show me a 
>> vulnerability?
> 
> What exactly prevents the client role from inserting e.g.
> 
> - 'robert''); drop table students; --'
> - millions of 'cat' rows
> - millions of 1GB-large rows
> 
> or just keep sending massive invalid query texts to fill the logs, or just 
> trying to connect until there's no available connection slots anymore, and 
> then keep spamming the server thousands of time per second to try to open new 
> connections, or ...?

My little code example was meant to show the basic principle: that the 
within-database artifacts that implement an application's backend can all be 
hidden from client code. The technique is simple and has been in use forever. 
That's why RDBMSs like PG have a privilege scheme. Object ownership and 
"current_role" are central notions in any such scheme.

My demo depended upon an ordinary human regime of password secrecy. The key 
point that my demo made was that "client" owns nothing, lacks the "create" 
privilege on the database in question, and (for good measure) lacks "create" on 
all schemas in the database. Therefore, a session that authorizes as "client" 
is limited in what it can do.

I'm not sure what you mean to say with this fragment:

 'robert''); drop table students; --'

It rather looks like something that you see in an essay about SQL injection. 
But the entire SQL injection discussion is out of scope in my toy demo because 
the requirements statement simply allows a session that's authorized as 
"client" to issue any SQL statement. I don't know why you picked the "students" 
table when there isn't one. I just ran my demo code to completion, re-connected 
as "client", and did this:

drop table students;

It produces the "42P01: error: table "students" does not exist. Of course, the 
message isn't lying. So this is a better test:

drop view s.v;

This produces the "42501" error: must be owner of view v. This isn't a lie 
either. The hacker has now learned that, at least, such a view does exist. 
Arguably, the difference between the two errors is a bad thing. And famously, in 
Oracle Database, you get a more generic "computer says no" in both cases. But 
PG is the way it is here and won't change in my lifetime. So, playing the 
hacker role, I tried this:

select definition from pg_views where schemaname = 's';

It caused the "42501" error: permission denied for view pg_views. And why 
shouldn't it? I didn't mention that I'd revoked "select" on every "pg_catalog" 
relation (and every "information_schema" relation) from public and then granted 
"select" explicitly on each to "u1" but not to "client".

This is the text-book principle of least privilege: you start with nothing and 
add what you need. For historical reasons, very few systems honor this 
principle by default. But it's an excellent feature of PG that you can overrule 
the default in the way that I described. The present toy demo works fine (all 
the tests behave the same) after my hardening intervention.
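The hardening described above amounts to statements along these lines; the database name is a placeholder, the role names follow the demo's naming, and revoking catalog access from public can break client tools, so this is a sketch rather than a recommendation:

```sql
-- Start from nothing for the client role...
revoke all on database app_db from public;
revoke create on schema s from public;
revoke select on all tables in schema pg_catalog from public;
revoke select on all tables in schema information_schema from public;

-- ...and grant back only what the design requires.
grant select on all tables in schema pg_catalog to u1;
grant usage on schema s to client;
```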

About inserting millions of rows, well... that's a word-game. The spec for my 
toy demo never mentioned that inserting millions of rows should be prevented.

There's only so far that you can go if you decide to articulate the hard-shell 
API as "use any SQL statement that you care to in order to access the intended 
app functionality". This is why the usual paradigm is to grant only "execute" 
on a designed set of subprograms that each implements a specified *business* 
function. People have been banging on about this approach since the late 
eighties (and probably since before then). Of course, the approach depends on a 
designed use of a privilege scheme. PG supports all this nicely. It's easy to 
implement an upper limit (in if-then-else code) on the number of rows that a 
procedure that implements "insert" allows. I s'pose that you'll say that the 
bad guy could call the procedure time and again. But techniques are available 
there too. (They're roughly analogous to what stops you making withdrawals from 
a bank account when the credit limit is reached.) Blocking a single "huge" row 
is trivial. Probably, a constraint that uses a SQL expression would suffice. 
But you can always implement the user-defined function for the hugeness test if 
you need to.
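For instance, a single "huge" row can be blocked with an ordinary check constraint; the table, column, and limit here are hypothetical:

```sql
alter table items
  add constraint value_not_huge
  check (octet_length(value) <= 1024);
```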

This leaves us with some kind of denial of service attack that uses a flavor of 
busy work or similar, like you mention. I don't think that there's any way that 
PG can prevent a connected role doing this:

do $body$
begin
  loop

  end loop;
end;
$body$;

or, say, a "select" with a recursive CTE with no stopping condition. There's 
always "set statement_timeout"—but that's in the hands of the session that 
authorizes as "client". I know of another 

Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Bryn Llewellyn
> hjp-pg...@hjp.at wrote:
> 
>> rjuju...@gmail.com wrote:
>> 
>>> b...@yugabyte.com wrote:
>>> 
>>> My demo seems to show that when a program connects as "client", it can 
>>> perform exactly and only the database operations that the database design 
>>> specified. Am I missing something? In other words, can anybody show me a 
>>> vulnerability?
>> 
>> What exactly prevents the client role from inserting e.g.
>> 
>> - 'robert''); drop table students; --'
> 
> It can do this but it won't do any harm since the client role doesn't have 
> permission to drop the table.
> 
>> - millions of 'cat' rows
>> - millions of 1GB-large rows
> 
> That depends on "the database operations that the database design specified", 
> but if the client role is supposed to be able to insert data, you can't 
> really prevent it from inserting non-sensical or enormous data. You can 
> encapsulate the insert functionality in a function or procedure and do some 
> sanity checks there. But automatically distinguishing between legitimate use 
> and abuse is generally not simple.
> 
>> or just keep sending massive invalid query texts to fill the logs, or just 
>> trying to connect until there's no available connection slots anymore, and 
>> then keep spamming the server thousands of time per second to try to open 
>> new connections, or ...?
> 
> There are often several layers of defense. The database frequently won't be 
> accessible from the open internet (or even the company network) directly. 
> Only a middle tier of application servers running vetted client code will 
> connect directly. Even those servers may not be accessible directly to end 
> users. There may be a layer of proxy servers above them. Each of these layers 
> may implement additional checks, rate limits and monitoring.

I'm afraid that I didn't see this from you until I'd already replied to 
Julien's turn in this thread. Sorry that I caused thread divergence. Thanks, 
Peter, for addressing the contribution(s) that other tiers in the stack make 
(and uniquely are able to make) in order to deliver the intended application 
functionality to the end user.



Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Julien Rouhaud
On Tue, Sep 27, 2022 at 05:27:22PM -0700, Bryn Llewellyn wrote:
> > hjp-pg...@hjp.at wrote:
> >
> >> rjuju...@gmail.com wrote:
> >>
> >>> b...@yugabyte.com wrote:
> >>>
> >>> My demo seems to show that when a program connects as "client", it can
> >>> perform exactly and only the database operations that the database design
> >>> specified. Am I missing something? In other words, can anybody show me a
> >>> vulnerability?
> >>
> >> What exactly prevents the client role from inserting e.g.
> >>
> >> - 'robert''); drop table students; --'
> >
> > It can do this but it won't do any harm since the client role doesn't have
> > permission to drop the table.

FTR it's a reference to https://xkcd.com/327/

Both of you are saying it's harmless because you're assuming that only the
client role may read the data and act on it, but the whole point of SQL
injection is to try to do actions that the role you have access to can't
already do.  And that's just a few out of dozens of examples of how having a
role connected to the database can do harm.

> >
> >> - millions of 'cat' rows
> >> - millions of 1GB-large rows
> >
> > That depends on "the database operations that the database design
> > specified", but if the client role is supposed to be able to insert data,
> > you can't really prevent it from inserting non-sensical or enormous data.
> > You can encapsulate the insert functionality in a function or procedure and
> > do some sanity checks there. But automatically distinguishing between
> > legitimate use and abuse is generally not simple.

Which is exactly the idea behind Tom's "if you don't trust another
session that is running as your userID, you have already lost".

> >
> >> or just keep sending massive invalid query texts to fill the logs, or just
> >> trying to connect until there's no available connection slots anymore, and
> >> then keep spamming the server thousands of time per second to try to open
> >> new connections, or ...?
> >
> > There are often several layers of defense. The database frequently won't be
> > accessible from the open internet (or even the company network) directly.
> > Only a middle tier of application servers running vetted client code will
> > connect directly. Even those servers may not be accessible directly to end
> > users. There may be a layer of proxy servers above them. Each of these
> > layers may implement additional checks, rate limits and monitoring.

If no one has direct SQL access to the database, then there's no problem with a
role being able to pg_terminate_backend() session for the same role, and this
thread shouldn't exist to begin with.

> I'm afraid that I didn't see this from you until I'd already replied to
> Julien's turn in this thread. Sorry that I caused thread divergence. Thanks,
> Peter, for addressing the contribution(s) that other tiers in the stack make
> (and uniquely are able to make) in order to deliver the intended application
> functionality to the end user.

Your whole argument in your other email was:

> Anyway... this kind of denial of service discussion is way outside the scope
> of what I addressed.

which looks like in total contradiction with your original email:

> Am I missing something? In other words, can anybody show me a vulnerability?

Again, don't give SQL access to untrusted users and you will avoid a lot of
problems, including someone abusing pg_terminate_backend().




Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Bryn Llewellyn
> rjuju...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
 hjp-pg...@hjp.at wrote:
 
> rjuju...@gmail.com wrote:
> 
>> b...@yugabyte.com wrote:
>> 
>> [Bryn] My demo seems to show that when a program connects as "client", 
>> it can perform exactly and only the database operations that the 
>> database design specified. Am I missing something? In other words, can 
>> anybody show me a vulnerability?
> 
> [Julien] What exactly prevents the client role from inserting e.g.
> 
> - 'robert''); drop table students; --'
 
 [Peter] It can do this but it won't do any harm since the client role 
 doesn't have permission to drop the table.
> 
> [Julien] For the record, it's a reference to [XKCD's "Little Bobby Tables" 
> cartoon]. Both of you are saying it's harmless because you're assuming that 
> only the client role may read the data and act on it, but the whole point of 
> SQL injection is to try to do actions that the role you have access to can't 
> already do. And that's just a few out of dozens of examples of how having a 
> role connected to the database can do harm.
> 
> [Julien]
> - millions of 'cat' rows
> - millions of 1GB-large rows
 
 [Peter] That depends on "the database operations that the database design 
 specified", but if the client role is supposed to be able to insert data, 
 you can't really prevent it from inserting non-sensical or enormous data. 
 You can encapsulate the insert functionality in a function or procedure 
 and do some sanity checks there. But automatically distinguishing between 
 legitimate use and abuse is generally not simple.
> 
> [Julien] Which is exactly what was the idea behind Tom's "if you don't trust 
> another session that is running as your userID, you have already lost".
> 
> [Julien] or just keep sending massive invalid query texts to fill the 
> logs, or just trying to connect until there's no available connection 
> slots anymore, and then keep spamming the server thousands of time per 
> second to try to open new connections, or ...?
 
 [Peter] There are often several layers of defense. The database frequently 
 won't be accessible from the open internet (or even the company network) 
 directly. Only a middle tier of application servers running vetted client 
 code will connect directly. Even those servers may not be accessible 
 directly to end users. There may be a layer of proxy servers above them. 
 Each of these layers may implement additional checks, rate limits and 
 monitoring.
> 
> [Julien] If no one has direct SQL access to the database, then there's no 
> problem with a role being able to pg_terminate_backend() session for the same 
> role, and this thread shouldn't exist to begin with.
> 
>> [Bryn] I'm afraid that I didn't see this from you until I'd already replied 
>> to Julien's turn in this thread. Sorry that I caused thread divergence. 
>> Thanks, Peter, for addressing the contribution(s) that other tiers in the 
>> stack make (and uniquely are able to make) in order to deliver the intended 
>> application functionality to the end user.
> 
> [Julien] Your whole argument in your other email was:
> 
>> [Bryn] Anyway... this kind of denial of service discussion is way outside 
>> the scope of what I addressed.
> 
> [Julien] which looks like in total contradiction with your original email:
> 
>> Am I missing something? In other words, can anybody show me a vulnerability?
> 
> [Julien] Again, don't give SQL access to untrusted users and you will avoid a 
> lot of problems, including someone abusing pg_terminate_backend().

I fear that you and I, Julien, are talking past each other. That's probably my 
fault. And maybe there's no hope of rescue now.

My brand new thread, started here:

https://www.postgresql.org/message-id/3d119733-6784-4e84-98e4-5124e69d4...@yugabyte.com

has nothing whatsoever to do with pg_terminate_backend(). The thread that I 
started here:

https://www.postgresql.org/message-id/10f360bb-3149-45e6-bffe-10b9ae31f...@yugabyte.com

to ask about pg_terminate_backend() reached a nice "case closed" for me because 
I filled an embarrassing hole in my understanding. I see now that, in a 
database of interest, I can revoke execute on the "killer" proc and grant it to 
any role that needs it. Doing this is nothing other than following the 
principle of least privilege. It's liberating to know that "Thing X" that you 
don't need can be made impossible. And so much nicer than applying the 
"what-about-ism" approach: "Thing X" brings some risks. But so what? "Thing Y", 
and its legion cousins, bring risks too—so don't worry about preventing "Thing 
X".

Now back to my new thread. I interpreted what Tom wrote to mean that he flatly 
rejected the idea that a database design was possible that prevented a client 
session that authorized as a role, that's designed for that purpose, from 

Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Julien Rouhaud
On Tue, Sep 27, 2022 at 07:29:39PM -0700, Bryn Llewellyn wrote:
>
> Now back to my new thread. I interpreted what Tom wrote to mean that he
> flatly rejected the idea that a database design was possible that prevented a
> client session that authorized as a role, that's designed for that purpose,
> from dropping tables and otherwise arbitrarily corrupting stuff. I expect
> that I completely misunderstood his point. But, anyway, that's what I
> responded to.
>
> Now it seems that you, Julien, are not convinced that the code that I showed
> prevents a session that authorizes as "client" from dropping the table, owned
> by "u1", where the data is. Nor are you convinced that a "client" session is
> prevented from inserting mixed or upper case data, updating existing data, or
> deleting existing data. Rather (as your Bobby Tables reference indicates) you
> think that a cunning SQL injection attack can manage to do these bad things.
>
> Well... the challenge is yours now: prove your point with some working code.

I'm convinced that the authorization system works as expected; what I'm not
convinced of is that the authorization system can prevent an untrusted user
with direct SQL access from actually hurting you.

So yes, in your case maybe the "client" role cannot drop the table shown, but
it can still insert nonsensical data, from a client point of view, or lead to
outages or other problems without any difficulty, and there's nothing in the
authorization system that can prevent that.

I'm also not convinced that your demo is proving anything, as "inserting only
values made of non-uppercase characters into a single table" isn't really
representative of any basic application, especially without knowing what that
data will be used for.

The only case where this example could make sense would be a log application,
and then with direct SQL access you can insert nonsensical or malicious data,
depending on what the application will do with that data (which could lead to
a crash in the client application, or make it do things it shouldn't do).




Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Bryn Llewellyn
> rjuju...@gmail.com wrote:
> 
> b...@yugabyte.com wrote:
>> 
>> Now back to my new thread. I interpreted what Tom wrote to mean that he 
>> flatly rejected the idea that a database design was possible that prevented 
>> a client session that authorized as a role, that's designed for that 
>> purpose, from dropping tables and otherwise arbitrarily corrupting stuff. I 
>> expect that I completely misunderstood his point. But, anyway, that's what I 
>> responded to.
>> 
>> Now it seems that you, Julien, are not convinced that the code that I showed 
>> prevents a session that authorizes as "client" from dropping the table, 
>> owned by "u1", where the data is. Nor are you convinced that a "client" 
>> session is prevented from inserting mixed or upper case data, updating 
>> existing data, or deleting existing data. Rather (as your Bobby Tables 
>> reference indicates) you think that a cunning SQL injection attack can 
>> manage to do these bad things.
>> 
>> Well... the challenge is yours now: prove your point with some working code.
> 
> I'm convinced that that authorization system works as expected, what I'm not 
> convinced of is that the authorization system can prevent an untrusted user 
> with a direct SQL access from actually hurting you. So yes in your case maybe 
> the "client" role cannot drop the showed table, but it can still insert 
> nonsensical data, from a client point of view, or lead to outage or other 
> problems without any difficulty, and there's nothing in the authorization 
> system that can prevent that.
> 
> I'm also not convinced that your demo is proving anything, as "inserting any 
> only value made of non-uppercase characters in a single table" isn't really 
> representative of any basic application, especially without knowing what that 
> data will be used for.
> 
> The only case this example could make sense would be a log application, and 
> then a direct SQL access you can insert nonsensical or malicious data, 
> depending on what the application will do with those data (which could lead 
> to crash in the client application, or make it do thing it shouldn't do).

My example wasn't meant in any way to be realistic. I'm sorry if I didn't make 
that clear from the outset. It was meant only to illustrate the principles. For 
example, the "lower case only" rule was meant to be an example of *any* data 
rule. Just like the write-once-read-many auto-generated surrogate primary key 
rule. Can you show me how those data rules, unrealistic as you might think them 
to be, can be violated?

> I'm not convinced... that the authorization system can prevent an untrusted 
> user with a direct SQL access from actually hurting you.

What do you mean by "untrusted"? Any person who is given the credentials to 
start a database session is trusted—even a person who can connect as a 
superuser and do untold harm. So focus on a person who has the credentials to 
connect as "client" in my example. But imagine a design that exposes 
functionality to "client" sessions exclusively through a carefully designed and 
implemented API that's expressed exclusively with user-defined functions and 
procedures. And choose something to model that meets your criteria for realism. 
Then show me, using a self-contained code example, how a session that 
authorized as "client" can cause the hurt that concerns you. Notice that "hurt" 
must be taken to mean having the persistently stored data no longer satisfying 
a specified business rule. And not anything to do with denial of service based 
on unconstrained resource consumption.

If, when I review it, I can see how to change the code to remove the 
vulnerability, then you'll have learned something. On the other hand, if you 
can show me a vulnerability that cannot be fixed, then I'll have learned 
something! I'm selfishly more interested in that second outcome because my 
overall mental model will have been improved.





Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Tom Lane
Bryn Llewellyn  writes:
> rjuju...@gmail.com wrote:
>> I'm not convinced... that the authorization system can prevent an untrusted 
>> user with a direct SQL access from actually hurting you.

> What do you mean by "untrusted"? Any person who is given the credentials
> to start a database session is trusted—even a person who can connect as
> a superuser and do untold harm. So focus on a person who has the
> credentials to connect as "client" in my example. But imagine a design
> that exposes functionality to "client" sessions exclusively through a
> carefully designed and implemented API that's expressed exclusively with
> user-defined functions and procedures.

Sure.  That is called an application server.  What we are is a SQL server,
and that means that the API is SQL commands, and the authorization model
is what the SQL spec says it is.  (Well, okay, there's some discrepancies
between the letter of the spec and what we actually do.  But my main
point here is that any session authenticated as user X has the same
privileges as any other session authenticated as user X, so there is
not a lot of point in user X mounting defenses against user X.)

So I think the concerns you're expressing here would be better addressed
at the next level up the stack.

regards, tom lane




Re: Limiting the operations that client-side code can perform upon its database backend's artifacts

2022-09-27 Thread Julien Rouhaud
On Tue, Sep 27, 2022 at 08:47:52PM -0700, Bryn Llewellyn wrote:
>
> For example, the "lower case only" rule was meant to be an
> example of *any* data rule. Just like the write-once-read-many auto-generated
> surrogate primary key rule. Can you show me how those data rules, unrealistic
> as you might think them to be, can be violated?

An unprivileged role by definition can't ignore or drop table constraints,
that's not the problem here.  The discussion should be "is having a direct SQL
access to the same role as my carefully written application is using
problematic", and the answer is yes.

> > I'm not convinced... that the authorization system can prevent an untrusted
> > user with a direct SQL access from actually hurting you.
> 
> What do you mean by "untrusted"? Any person who is given the credentials to
> start a database session is trusted—even a person who can connect as a
> superuser and do untold harm. So focus on a person who has the credentials to
> connect as "client" in my example. But imagine a design that exposes
> functionality to "client" sessions exclusively through a carefully designed
> and implemented API that's expressed exclusively with user-defined functions
> and procedures. And choose something to model that meets your criteria for
> realism. Then show me, using a self-contained code example, how a session
> that authorized as "client" can cause the hurt that concerns you. Notice that
> "hurt" must be taken to mean having the persistently stored data no longer
> satisfying  as specified business rule. And not anything to do with denial of
> service based on unconstrained resource consumption.

You mean like if the application takes care of checking that the logged-in user
is allowed to insert data, based on whatever application-defined rules / user
profile, while the SQL role can simply insert data and/or call the carefully
written functions?  Yes, the data will be consistent, but if your role just
transferred money from one account to another, consistency is not really the
problem.
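[A concrete, hypothetical sketch of that point: a session connected as the
application's role can invoke the same carefully written API directly, so every
data rule holds while the business-level authorization check that lives in the
application is never run.  All names below are invented for illustration.]

```sql
-- All names are invented for illustration.  Suppose the application only
-- calls api.transfer_funds() after checking, in application code, that the
-- logged-in end user is allowed to move money.  A person holding the raw
-- credentials for the "client" role can skip that check entirely:
--
--   psql -U client -d appdb
--
-- and then simply run:
SELECT api.transfer_funds(
         from_account => 'alice',
         to_account   => 'mallory',
         amount       => 1000000.00);
-- Every constraint and data rule is satisfied and the data stays
-- consistent; the application-level rule ("only managers may transfer")
-- was never checked.
```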




Re: pgbackrest Help Required

2022-09-27 Thread Inzamam Shafiq
Hi Stephen,

Thanks for your response.

> Not sure what 'traditional zip method' means here, but if you've copied
> the shell script out of the documentation, that's not a safe
> configuration.

The "traditional zip method" means that my archive_command and restore_command
parameters are set as follows:

archive_command = 'if test ! -d "/archives/"; then mkdir -p "/archives/"; fi; 
test ! -f "/archives/%f" && gzip < "%p" > "/archives/%f.gz"'

restore_command = 'gunzip < "/archives/%f".gz > "%p"'
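[For comparison, the pgbackrest equivalents of those two settings in
postgresql.conf would look roughly like the following; the stanza name "main"
is an assumption.]

```ini
# postgresql.conf -- stanza name "main" is an assumption
archive_command = 'pgbackrest --stanza=main archive-push %p'
restore_command = 'pgbackrest --stanza=main archive-get %f "%p"'
```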


> In configuration of pgbackrest, simply configure each of the PG hosts
> and the repo and then set backup-standby=y.  You'll need to make sure
> that the archive_command is set to pgbackrest, of course.  Having an
> existing streaming replication setup is perfectly fine and pgbackrest
> won't interfere with that.  You will want to set your restore command
> to pgbackrest also though, in case streaming fails.
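[A minimal pgbackrest.conf reflecting that advice might look roughly like
this; the host names, data paths, PostgreSQL version, retention setting, and
the stanza name "main" are all assumptions.]

```ini
# /etc/pgbackrest/pgbackrest.conf on the repository (backup) host --
# all names and paths below are assumptions
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
backup-standby=y

[main]
pg1-host=primary.example.com
pg1-path=/var/lib/postgresql/14/main
pg2-host=standby.example.com
pg2-path=/var/lib/postgresql/14/main
```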

I have done the configuration; I just need confirmation whether I did it right
or whether it still needs improvement. It would be helpful to have
documentation where someone with a primary and standby already in place has
configured pgbackrest without disturbing the replication. I have followed the
steps below; please have a look at them and suggest any improvements if
required.


  1.  Configure passwordless SSH connectivity between hosts
  2.  Configure NFS for a shared drive between hosts
  3.  Stop the standby cluster
  4.  Prepare the primary node for pgbackrest and replication
  5.  Configure pgbackrest to back up the primary node
  6.  Set up the standby node for pgbackrest and resume replication:
     *   Configure the standby cluster for pgbackrest and check backup info
     *   If the standby has replication issues, perform a --delta restore
         using pgbackrest
     *   Check the replication
  7.  Test streaming replication
  8.  Edit pgbackrest.conf on the standby node for backup
  9.  Perform a backup from the standby
  10. Test streaming replication again
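[A rough sketch of the commands behind the restore and backup steps above,
run as the postgres user; the stanza name "main" and the use of a --delta
standby-type restore are assumptions, so adapt before running.]

```shell
# On the repository host: create and verify the stanza
pgbackrest --stanza=main stanza-create
pgbackrest --stanza=main check

# If the standby is out of sync: stop it, then rewrite its data directory
# in place from the latest backup (--delta only rewrites files that
# differ) and bring it back up as a standby
pgbackrest --stanza=main --delta --type=standby restore

# With backup-standby=y in pgbackrest.conf, a backup initiated from the
# repository host copies the data files from the standby
pgbackrest --stanza=main backup
```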

Regards,

Inzamam Shafiq
Sr. DBA


From: Stephen Frost
Sent: Tuesday, September 27, 2022 8:39 PM
To: Inzamam Shafiq
Cc: pgsql-general@lists.postgresql.org
Subject: Re: pgbackrest Help Required

Greetings,

* Inzamam Shafiq (inzamam.sha...@hotmail.com) wrote:
> I am in process of configuring pgbackrest, I have followed documents and 
> configured backup from backup server. I have a setup of Primary and Hot 
> Standby, when I configured pgbackrest the standby DB got un-synced and now I 
> am unable to recover it, can anyone help or share a good document where we 
> can implement pgbackrest where streaming replication is already in place 
> using traditional pg_basebackup method?
>
> Note: I want to configure pgbackrest and to take backup from standby DB, 
> however streaming replication is already there, where the archive and restore 
> commands are configured using traditional zip method.

Not sure what 'traditional zip method' means here, but if you've copied
the shell script out of the documentation, that's not a safe
configuration.

In configuration of pgbackrest, simply configure each of the PG hosts
and the repo and then set backup-standby=y.  You'll need to make sure
that the archive_command is set to pgbackrest, of course.  Having an
existing streaming replication setup is perfectly fine and pgbackrest
won't interfere with that.  You will want to set your restore command
to pgbackrest also though, in case streaming fails.

Thanks,

Stephen