Re: momjian.us is down?

2018-03-10 Thread Bruce Momjian
On Sun, Mar  4, 2018 at 11:22:56PM -0800, Igal wrote:
> 
> 
> On 03/04/2018 09:57 PM, Tom Lane wrote:
> >Igal  writes:
> >>On 03/04/2018 07:24 PM, Adrian Klaver wrote:
> >>>On 03/04/2018 05:53 PM, Igal wrote:
> >>>>I am trying to connect to http://momjian.us/ but I get connection
> >>>>timeout (from Firefox):
> >Dunno if it's related, but large parts of the US Northeast were without
> >power over the weekend due to storm damage.
> 
> Looks like the site is hosted in Philadelphia, so very possibly related. 
> I'm glad that the storm is behind you guys now.

Yes, a weather-related power outage was the cause of the 48-hour
downtime.  Sorry.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: PgUpgrade bumped my XIDs by ~50M?

2018-04-04 Thread Bruce Momjian
On Wed, Apr  4, 2018 at 05:29:46PM -0500, Jerry Sievers wrote:
> We have a large >20TB system just pg_upgraded from 9.5 to 9.6 as per the
> versions shown below.
> 
> The system does <5M transactions/day based on sum(commit + abort) from
> pg_stat_database.
> 
> Autovac is running all possible threads now and upon investigating I see
> that thousands of tables are now above the freeze threshold.  Same
> tables all appear ~50M xids older than they did yesterday and the
> upgrade was less than 24 hours ago.
> 
> I have a "safety" snap made of the system before upgrade that can be
> used for inspection.
> 
> Any ideas why the age jump?

Uh, you can read how pg_upgrade handles frozen xids in pg_upgrade.c:

https://doxygen.postgresql.org/pg__upgrade_8c_source.html#l00543

I am not sure what would have caused such a jump.  pg_upgrade brings
over the frozen values for each table, and sets the server's frozen
counter to match the old one.

If you run 'pg_dump --binary-upgrade' you will see the frozen xids being
transferred:

-- For binary upgrade, set heap's relfrozenxid and relminmxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '558', relminmxid = '1'
WHERE oid = 'public.test'::pg_catalog.regclass;
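
To see which tables actually moved, a quick per-database check (just a
sketch; run it against both the safety snap and production and compare)
would be:

    SELECT relname, relfrozenxid, age(relfrozenxid)
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 20;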

Is it possible that pg_upgrade used 50M xids while upgrading?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: PgUpgrade bumped my XIDs by ~50M?

2018-04-04 Thread Bruce Momjian
On Wed, Apr  4, 2018 at 07:13:36PM -0500, Jerry Sievers wrote:
> Bruce Momjian  writes:
> > Is it possible that pg_upgrade used 50M xids while upgrading?
> 
> Hi Bruce.
> 
> Don't think so, as I did just snap the safety snap and ran another
> upgrade on that.
> 
> And I also compared txid_current for the upgraded snap and our running
> production instance.
> 
> The freshly upgraded snap is ~50M txids behind the prod instance.

Are the objects 50M behind or is txid_current 50M different?  Higher or
lower?
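
For example, running something like this on both the freshly upgraded
snap and production would show which measure differs (an illustrative
check, not a diagnosis):

    SELECT txid_current();
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;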


> 
> If this is a not too uncommon case of users running amok, then this time
> in particular they really went off the charts :-)

I have never heard of this problem.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: PgUpgrade bumped my XIDs by ~50M?

2018-04-05 Thread Bruce Momjian
On Wed, Apr  4, 2018 at 08:29:06PM -0400, Bruce Momjian wrote:
> On Wed, Apr  4, 2018 at 07:13:36PM -0500, Jerry Sievers wrote:
> > Bruce Momjian  writes:
> > > Is it possible that pg_upgrade used 50M xids while upgrading?
> > 
> > Hi Bruce.
> > 
> > Don't think so, as I did just snap the safety snap and ran another
> > upgrade on that.
> > 
> > And I also compared txid_current for the upgraded snap and our running
> > production instance.
> > 
> > The freshly upgraded snap is ~50M txids behind the prod instance.
> 
> Are the objects 50M behind or is txid_current 50M different?  Higher or
> lower?

Uh, here is a report of a similar problem from March 6, 2018:


https://www.postgresql.org/message-id/flat/C44C73BC-6B3A-42E0-9E44-6CE4E5B5D601%40ebureau.com#c44c73bc-6b3a-42e0-9e44-6ce4e5b5d...@ebureau.com

I upgraded a very large database from 9.6 to 10.1 via pg_upgrade
recently, and ever since, the auto vacuum has been busy on a large
legacy table that has experienced no changes since the upgrade. If the
whole table had been frozen prior to the upgrade, would you expect it to
stay frozen? 

It sure smells like we have a bug here.  Could this be statistics
collection instead?
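
One way to see whether autovacuum is freezing that table or merely
re-analyzing it would be something like this (a sketch; the table name
is a placeholder):

    SELECT relname, last_autovacuum, last_autoanalyze,
           autovacuum_count, autoanalyze_count
    FROM pg_stat_user_tables
    WHERE relname = 'legacy_table';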

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Can PostgreSQL create new WAL files instead of reusing old ones?

2018-04-27 Thread Bruce Momjian
On Wed, Apr 18, 2018 at 09:34:50AM -0400, Vick Khera wrote:
> On Wed, Apr 18, 2018 at 12:35 AM, Michael Paquier  wrote:
> 
> 
> That looks like a rather difficult problem to solve in PostgreSQL
> itself, as the operator running the cluster is in charge of setting up
> the FS options which would control the COW behavior, so it seems to me
> 
> 
> You cannot turn off CoW on ZFS. What other behavior would you refer to here?
> 
> I suppose one could make a dedicated data set for the WAL and have ZFS make a
> reservation for about 2x the total expected WAL size. It would require careful
> attention to detail if you increase WAL segments configuration, though, and if
> you had any kind of hiccup with streaming replication that caused the segments
> to stick around longer than expected (but that's no different from any other
> file system).

Uh, at the risk of asking an obvious question, why is the WAL file COW
if it was renamed?  No one has the old WAL file open, as far as I know.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: PostgreSQL : encryption with pgcrypto

2018-06-25 Thread Bruce Momjian
On Thu, May 17, 2018 at 07:07:00AM +, ROS Didier wrote:
> Hi
> 
>Regarding the encryption of data by pgcrypto, I would like to
> know the recommendations for the management of the key.
> 
>Is it possible to store it off the PostgreSQL server?
> 
>Is there the equivalent of Oracle "wallet" ?

Late reply, but the last presentation on this page shows how to use
cryptographic hardware with Postgres:

https://momjian.us/main/presentations/security.html

You could modify that to use a key management system (KMS).
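
The rough shape of that, with the key supplied from outside the
database, is shown below (a minimal sketch; :'external_key' is a psql
variable standing in for a key fetched from a KMS or hardware module):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    -- the key itself never has to be stored in the database
    SELECT pgp_sym_encrypt('sensitive data', :'external_key');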

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: using pg_basebackup for point in time recovery

2018-06-25 Thread Bruce Momjian
On Thu, Jun 21, 2018 at 04:50:38PM -0700, David G. Johnston wrote:
> On Thu, Jun 21, 2018 at 4:26 PM, Vik Fearing 
> wrote:
> 
> On 21/06/18 07:27, Michael Paquier wrote:
> > Attached is a patch which includes your suggestion.  What do you think?
> > As that's an improvement, only HEAD would get that clarification.
> 
> Say what?  If the clarification applies to previous versions, as it
> does, it should be backpatched.  This isn't a change in behavior, it's a
> change in the description of existing behavior.
> 
> 
> Generally only actual bug fixes get back-patched; but I'd have to say this
> looks like it could easily be classified as one.

FYI, in recent discussions on the docs list:


https://www.postgresql.org/message-id/CABUevEyumGh3r05U3_mhRrEU=dfacdrr2hew140mvn7fsbm...@mail.gmail.com

there was the conclusion that:

If it's a clean backpatch I'd say it is -- people who are using
PostgreSQL 9.6 will be reading the documentation for 9.6 etc, so they
will not know about the fix then.

If it's not a clean backpatch I can certainly see considering it, but if
it's not a lot of effort then I'd say it's definitely worth it.

so the rule I have been using for backpatching doc stuff has changed
recently.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Convert Existing Table to a Partition Table in PG10

2018-08-09 Thread Bruce Momjian
On Sun, Jul  1, 2018 at 07:09:33PM -0700, Clifford Snow wrote:
> Vic,
> I'd be happy to add my blog to planet.postgresql.org but my of my articles are
> not on postgresql. I'm using github pages for my blog and I do have tags for
> each article. Is there someone to filter on those tags? 

When you register your blog, there is an 'Authorfilter' field, but that
only filters authors, which I assume looks at the RSS "author" tag. 
Doesn't seem it can filter on category tags.

The recommended solution is to create a feed on your end just for that
category:

https://wiki.postgresql.org/wiki/Planet_PostgreSQL

All blogs should be about PostgreSQL or closely related technologies. If
you want to blog about other things as well, please put your PostgreSQL
specific posts in a separate category/tag/label, and use the feed for
this category only for Planet PostgreSQL. 

---



> 
> Clifford
> 
> On Sun, Jul 1, 2018 at 5:13 PM Vik Fearing  
> wrote:
> 
> On 02/07/18 01:43, Clifford Snow wrote:
> > David,
> > Thanks for the suggestion. That really simplifies creating the RANGE. 
> >
> > For all, I'm pretty much a postgresql novice, but I've tried to document
> > what I've learned in the hopes that it can help someone else.
> >
> > You can read my blog post
> > at  https://osm_seattle.snowandsnow.us/articles/Partitioning-Postgresql/
> 
> Please consider adding your PostgreSQL-related posts to Planet.
> https://planet.postgresql.org/add.html
> --
> Vik Fearing                                          +33 6 46 75 15 36
> http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support
> 
> 
> 
> --
> @osm_seattle
> osm_seattle.snowandsnow.us
> OpenStreetMap: Maps with a human touch

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: When exactly is a TIMESTAMPTZ converted to the sessions time zone?

2018-08-09 Thread Bruce Momjian
On Tue, Jul  3, 2018 at 05:00:17PM -0400, Tom Lane wrote:
> Thomas Kellerer  writes:
> > A recent discussion around timestamptz behaviour has lead me to question my 
> > own understanding on how a TIMESTAMPTZ is converted to the session's time 
> > zone.
> > I assumed this conversion happens *on the server* before the value is sent 
> > to the client.
> 
> It's done in the datatype's output conversion function.
> 
> > A co-worker of mine claims that this is purely a client side thing, and 
> > that the server will always send the "plain" UTC value that is stored in a 
> > timestamptz column.
> 
> Your co-worker is certainly wrong so far as text output is concerned.
> If you retrieve data in binary format, though, it looks like you get
> the raw (un-rotated) timestamp value, so that any conversion would have
> to be done on the client side.

Wow, I am kind of surprised by that.  Do any other data types have this
behavior?
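
For anyone following along, the text-format rotation is easy to see
from psql (a small illustration; results shown as comments):

    SET TIME ZONE 'UTC';
    SELECT '2018-07-03 12:00:00+00'::timestamptz;  -- 2018-07-03 12:00:00+00
    SET TIME ZONE 'America/New_York';
    SELECT '2018-07-03 12:00:00+00'::timestamptz;  -- 2018-07-03 08:00:00-04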

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-08-09 Thread Bruce Momjian
On Mon, Jul 16, 2018 at 11:57:40AM +0200, Vincenzo Campanella wrote:
> Il 16.07.2018 11:41, Albrecht Dreß ha scritto:
> >Am 16.07.18 00:14 schrieb(en) Tim Cross:
> >>>Thank you for the point. I'm the C++ programmer and I'm author of the
> >>>C++ client library for PostgreSQL - Pgfe and I'm going to use it in
> >>>this project. But I'm not sure about the cross-platform GUI toolkit.
> >>
> >>The cross-platform GUI toolkit will be the challenge.
> >
> >Try Qt <https://www.qt.io/download-qt-installer>.  It uses c++, comes with
> >a dual license (LGPL/commercial) and supports all relevant platforms:
> >- Linux: will work ootb for all distos I know, without the need to ship it
> >with libraries
> >- macOS: includes support to create the usual bundles which go into the
> >Applications folder
> >- Winbloze: works fine there, too, if you insist on a broken os ;-)
> >
> >I worked on a Qt-based oss project in the past, and it is actually trivial
> >to create binaries for all aforementioned platforms from the same sources.
> 
> That's a very good solution, IMHO.
> 
> Otherwise, WxWidgets (https://www.wxwidgets.org/) could also be a good
> solution...

pgAdmin used to use wxWidgets but moved away from it recently with
pgAdmin 4.  I would ask them what problems caused them to stop using it.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: User documentation vs Official Docs

2018-08-10 Thread Bruce Momjian
On Fri, Jul 20, 2018 at 05:31:40PM -0700, Adrian Klaver wrote:
> JD sit down, I am going to agree with you:) The documentation as it stands
> is very good, though it requires some fore knowledge to successfully
> navigate. On pages with a lot of content it often is not evident, to many,
> that there are actually examples at the bottom of the page. Also that the
> exceptions to the rules are called out there also. The general concept of
> presenting a task, writing a procedure to accomplish the task and pointing
> to the documentation that covers the procedure would be a helpful addition.
> It would be nice to point to something like that in a post rather then
> continually rebuilding the explanation every time a new user hits the list.
> Looking at the link posted upstream:

I am jumping in late here, but I do have some thoughts on this topic. 
To me, there are three levels of information presentation:

1.  Task-oriented documents
2.  Exhaustive technical documentation/manuals
3.  Concept-level material

I think we can agree that the Postgres documentation does very well
with #2, and we regularly get compliments for its quality.

For #1, this is usually related to performing a task without requiring a
full study of the topic.  For example, if I need iptables rules to block
a sunrpc attack, or use NFS over ssh, I really want some commands that I
can study and adjust to the task --- I don't want to study all the
features of these utilities to get the job done.  This is an area the
docs don't cover well, but our blogs and wikis do.

For #3, this is mostly covered by books.  This topic requires a lot of
explanation and high-level thinking.  We have some of that in our docs,
but in general books probably do this better.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Who and How is responsible for released installations packages and 3rd party packs? (e.g. on https://yum.postgresql.org/9.6/redhat/rhel-7.3-x86_64/)

2018-08-10 Thread Bruce Momjian
On Fri, Aug  3, 2018 at 10:30:51AM +0200, Pavel Raiskup wrote:
> On Friday, August 3, 2018 8:08:55 AM CEST Devrim Gündüz wrote:
> > On Thu, 2018-08-02 at 16:26 +0300, Alexandru Lazarev wrote:
> > > What about 3rd party libraries like plv8 - Who and How (based on which
> > > criteria, which versions) build RPM and upload them there?
> >
> > Latest versions of PL/v8 does not build on RHEL/Fedora anymore, at least 
> > from
> > the package build point of view.
> 
> Yes, packaging of plv8 is pretty complicated.  If one decided to ship RPM
> package with plv8, it would mean maintenance of whole v8 language - which
> is incredibly complicated (incompatible changes all the time, backporting
> security fixes, etc.).
> 
> That's the reason why plv8 (and even v8 runtime) becomes dropped from Linux
> distributions.

Uh, who is building PL/v8 currently, and for what operating systems?  No one?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: How to revoke privileged from PostgreSQL's superuser

2018-08-10 Thread Bruce Momjian
On Mon, Aug  6, 2018 at 06:19:55AM -0700, David G. Johnston wrote:
> On Monday, August 6, 2018,  wrote:
> 
> 
> I have a request for revoking the access to user's data from DBA-user.
> I think the request is right because users should be the only ones can
> access their data.
> 
> 
> User then needs to encrypt data prior to storing it.  Superuser can still
> access the data but would be challenged to make sense of it,

Keep in mind DBAs can often remove data with little detection, unless
you are using some kind of block chain, which itself can force
serialized data access, slowing things down.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Who and How is responsible for released installations packages and 3rd party packs? (e.g. on https://yum.postgresql.org/9.6/redhat/rhel-7.3-x86_64/)

2018-08-10 Thread Bruce Momjian
On Fri, Aug 10, 2018 at 09:41:44PM +0200, Christoph Berg wrote:
> Re: Bruce Momjian 2018-08-10 <20180810192205.gc7...@momjian.us>
> > Uh, who is building PL/v8 currently, and for what operating systems?  No 
> > one?
> 
> No one is likely correct.

Wow, OK.  That's bad news.  So PL/v8 is no longer a viable stored
procedure language?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: How to revoke privileged from PostgreSQL's superuser

2018-08-14 Thread Bruce Momjian
On Fri, Aug 10, 2018 at 10:34:26PM -0400, Rui DeSousa wrote:
> With that logic then you should use flat files for encrypted data and
> unencrypted data.  It’s what was done many moons ago; and its unstructured
> haphazard approach gave rise to RDBMS systems.
> 
> You cannot say that encrypted data does not belong in a RDBMS system… that is
> just false.  Hell, I’ve stored blobs in a RDMBS system which could have easily
> been stored in a different system if need be.  It’s a design choice and what
> fits the application and budget needs.
> 
> Encrypting sensitive information and storing in the database is a valid use
> case.  It may be only a few columns that are encrypted or a complete document
> (blob); there is no need to increase complexity just to move those columns out
> of the database.

I think the point is that it makes sense to store data encrypted in a
database only if it is a payload on another piece of non-encrypted data.
You can't easily index, restrict, or join encrypted data, so it doesn't
have a huge value alone in a database.
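
The usual pattern is to keep the lookup and join keys in the clear and
encrypt only the sensitive payload column, e.g. (a hypothetical table,
not one from this thread):

    CREATE TABLE patient_notes (
        patient_id integer PRIMARY KEY,  -- plain, indexable, joinable
        note_enc   bytea NOT NULL        -- ciphertext from pgp_sym_encrypt() or the client
    );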

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: How to revoke privileged from PostgreSQL's superuser

2018-08-14 Thread Bruce Momjian
On Fri, Aug 10, 2018 at 04:06:40PM -0400, Benedict Holland wrote:
> I also would take Bruce's comment with a massive grain of salt. Everything 
> that
> everyone does on a database is logged somewhere assuming proper logging. Now 
> do
> you have the person-power to go through gigs of plain text logs to find out if
> someone is doing something shady... that is a question for your management
> team. Also, if you suspect someone of doing something shady, you should
> probably revoke their admin rights. 

Agreed, the best way to limit the risk of undetected DBA removal of data
is secure auditing --- I should have mentioned that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: How to revoke privileged from PostgreSQL's superuser

2018-08-15 Thread Bruce Momjian
On Tue, Aug 14, 2018 at 03:59:19PM -0400, Bruce Momjian wrote:
> On Fri, Aug 10, 2018 at 04:06:40PM -0400, Benedict Holland wrote:
> > I also would take Bruce's comment with a massive grain of salt. Everything 
> > that
> > everyone does on a database is logged somewhere assuming proper logging. 
> > Now do
> > you have the person-power to go through gigs of plain text logs to find out 
> > if
> > someone is doing something shady... that is a question for your management
> > team. Also, if you suspect someone of doing something shady, you should
> > probably revoke their admin rights. 
> 
> Agreed, the best way to limit the risk of undetected DBA removal of data
> is secure auditing --- I should have mentioned that.

So, how do you securely audit?  You ship the logs to a server that isn't
controlled by the DBA, via syslog?  How do you prevent the DBA from
turning off logging when they want to do something undetected?  Do you
log the turning off of logging?
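
One possible piece of that is sending the server log to a syslog
collector the DBA does not control (a sketch; the values are examples,
and of course the DBA can still run ALTER SYSTEM unless that is
audited too):

    ALTER SYSTEM SET log_destination = 'syslog';
    ALTER SYSTEM SET syslog_facility = 'local0';
    SELECT pg_reload_conf();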

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct plan

2018-08-15 Thread Bruce Momjian
On Wed, Aug 15, 2018 at 03:22:10PM -0400, Stephen Frost wrote:
> Greetings,
> 
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
> > Unless there are substantial objections, or nontrivial changes as a result
> > of this round of comments, we anticipate making the CoC official as of
> > July 1 2018.
> 
> We seem to be a bit past that timeline...  Do we have any update on when
> this will be moving forward?
> 
> Or did I miss something?

Are we waiting for the conference community guidelines to be solidified?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: pg_upgrade fails saying function unaccent(text) doesn't exist

2018-08-29 Thread Bruce Momjian
On Wed, Aug 29, 2018 at 06:09:53PM +, Gunnlaugur Thor Briem wrote:
> Hi,
> 
> I'm trying to upgrade an old PG 9.4 cluster to PG 10, and it's failing on
> creating an index that uses the unaccent(text) function.
> 
> That function is part of the unaccent extension, which is installed in the old
> DB cluster. I expect pg_upgrade to create that extension as part of the
> upgrade. It does create other extensions that are installed in the old DB
> cluster. I don't get why this one isn't included.

This is caused by security changes made in PG 10.3 and other minor
releases.  Please see this thread for an outline of the issue:

   
https://www.postgresql.org/message-id/flat/152106914669.1223.5104148605998271987%40wrigleys.postgresql.org

I think you have to change your index function to specify the schema
name before the unaccent function call, e.g.

SELECT lower(public.unaccent(btrim(regexp_replace(

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct plan

2018-09-15 Thread Bruce Momjian
On Sat, Sep 15, 2018 at 08:44:10AM +0200, Chris Travers wrote:
> The protection there is a culturally diverse code of conduct committee who can
> then understand the relationship between politics and culture.  And just to
> note, you can't solve problems of abuse by adopting mechanistically applied
> rules.
> 
> Also a lot of the major commercial players have large teams in areas where
> there is a legal right to not face discrimination on the basis of political
> opinion.  So I don't see merely expressing an unpopular political opinion as
> something the code of conduct committee could ever find actionable, nor do I
> think political donations or membership in political or religious 
> organizations
> etc would be easy to make actionable.

Well, we could all express our unpopular opinions on this channel and
give it a try.  ;-)  I think some have already, and nothing has happened
to them.  With a CoC, I assume that will remain true.

> But I understand the sense of insecurity.  Had I not spent time working in 
> Asia
> and Europe, my concerns would be far more along these lines.  As it is, I 
> don't
> think the code of conduct committee will allow themselves to be used to cause
> continental splits in the community or to internationalize the politics of the
> US.

Agreed, and that is by design.  If anything, the CoC team plus the core
team have even more diversity than the core team alone.

> I think the bigger issue is that our community *will* take flak and possibly 
> be
> harmed if there is an expectation set that picking fights in this way over
> political opinions is accepted.  Because while I don't see the current
> community taking action on the basis of political views, I do see a problem
> more generally with how these fights get picked and would prefer to see some
> softening of language to protect the community in that case.  But again, I am
> probably being paranoid.

Well, before the CoC, anything could have happened since there were no
rules at all about how such problems were handled, or not handled. 

There is a risk that if we adopt a CoC, and nothing happens, and the
committee does nothing, that they will feel like a failure, and get
involved when it was best they did nothing.  I think the CoC tries to
address that, but nothing is perfect.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct plan

2018-09-15 Thread Bruce Momjian
On Sat, Sep 15, 2018 at 11:32:06AM -0400, Bruce Momjian wrote:
> There is a risk that if we adopt a CoC, and nothing happens, and the
> committee does nothing, that they will feel like a failure, and get
> involved when it was best they did nothing.  I think the CoC tries to
> address that, but nothing is perfect.

I think this is Parkinson's law:

https://en.wikipedia.org/wiki/Parkinson%27s_law

We might want to put something in the next draft CoC saying that the
committee is a success if it does nothing.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct plan

2018-09-15 Thread Bruce Momjian
On Sat, Sep 15, 2018 at 04:24:38PM +, Martin Mueller wrote:
> What counts as foul language has changed a great deal in the last two 
> decades. 
> You could always tie it to what is printable in the New York Times, but that
> too is changing. I could live with something like “Be considerate, and if you
> can’t be nice, be at least civil”.

I have to admit I am surprised how polite the language is here,
considering how crudely some other open source projects communicate.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct

2018-09-19 Thread Bruce Momjian
On Wed, Sep 19, 2018 at 11:24:29AM +1000, Julian Paul wrote:
> It's overly long and convoluted.
> 
> "inclusivity" Is a ideologue buzzword of particular individuals that offer
> very little value apart from excessive policing of speech and behaviour
> assumed to be a problem where none exist.
> 
> "Personal attacks and negative comments on personal characteristics are
> unacceptable, and will not be permitted. Examples of personal
> characteristics include, but are not limited to age, race, national origin
> or ancestry, religion, gender, or sexual orientation."
> 
> So just leaving it at "Personal attacks" and ending it there won't do
> obviously. I'm a big advocate of people sorting out there own personal
> disputes in private but...
> 
> "further personal attacks (public or *private*);"
> 
> ...lets assume people don't have the maturity for that and make it all
> public.
> 
> "may be considered offensive by fellow members" - Purely subjective and
> irrelevant to a piece of community software.

You might notice that a bullet list was removed and those example items
were added 18 months ago:


https://wiki.postgresql.org/index.php?title=Code_of_Conduct&diff=31924&oldid=29402

I realize that including no examples at all has its attractions, but some
felt that having examples would be helpful.  I am not a big fan of the
"protected groups" concept because it is often exploited, which is why
they are listed more as examples.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct

2018-09-20 Thread Bruce Momjian
On Thu, Sep 20, 2018 at 05:20:55PM +0200, Chris Travers wrote:
> I suspect most of us could probably get behind the groups listed in the
> antidiscrimination section of the European Charter of Fundamental Rights at
> least as a compromise.
> 
> Quoting the  English version:
> 
> "Any discrimination based on any ground such as sex, race, colour, ethnic or
> social origin, genetic features, language, religion or belief, political or 
> any
> other opinion, membership of a national minority, property, birth, disability,
> age or sexual orientation shall be prohibited."
> 
> The inclusion of "political or any other opinion" is a nice addition and
> prevents a lot of concern.

Huh.  Certainly something to consider when we review the CoC in a year.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: Code of Conduct plan

2018-10-11 Thread Bruce Momjian
On Thu, Sep 20, 2018 at 07:12:22AM +0200, Chris Travers wrote:
> If we have a committer who loudly and proudly goes to neo-nazi rallies or
> pickup artist / pro-rape meetups, then actually yes, I have a problem with
> that. That impacts my ability to work in the community, impacts everyone's
> ability to recruit people to work on Postgres, potentially makes people
> reluctant to engage with the community, etc.
> 
> There's a problem here though. Generally in Europe, one would not be able to
> fire a person or even discriminate against him for such activity.  So if you
> kick someone out of the PostgreSQL community for doing such things in, say,
> Germany but their employer cannot fire them for the same, then you have a real
> problem if improving PostgreSQL is the basis of their employment.    EU
> antidiscrimination law includes political views and other opinions so
> internationally that line is actually very hard to push in an international
> project.  So I think you'd have a problem where such enforcement might 
> actually
> lead to legal action by the employer, or the individual kicked out, or both.

Yes, I had the same reaction.  Activity not involving other Postgres
members seems like it would not be covered by the CoC, except for
"behavior that may bring the PostgreSQL project into disrepute", which
seems like a stretch.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: DRY up GUI wiki pages

2019-07-10 Thread Bruce Momjian
On Wed, Jul 10, 2019 at 09:08:30AM -0600, Roger Pack wrote:
> I learned today there are "two" wiki pages for GUI clients:
> 
> https://wiki.postgresql.org/wiki/PostgreSQL_Clients
> 
> https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
> 
> I'd like to DRY them up so there aren't two lists which confuses
> newcomers.  Any objections?  If not I'll probably make one of those
> pages into GUI's and one into "non GUI's" or something like that.

Agreed, a cleanup would be nice.  :-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Matching pgp_sym_encrypt() and gpg2 output

2019-08-27 Thread Bruce Momjian
I am trying to generate output from the command-line program gpg2 that
matches the output of pgp_sym_encrypt().  gpg2 outputs:

$ echo 'my access password' | tr -d '\n' | gpg2 --symmetric --batch
> --cipher-algo AES256 --passphrase 'abc' | xxd -p | tr -d '\n'

8c0d0409030248b24f9d2bc91287f5d24701...

while pgp_sym_encrypt() outputs:

SELECT pgp_sym_encrypt('my access password', 'abc', 'cipher-algo=aes256');

 pgp_sym_encrypt

 \xc30d0409030282dbcc61c149fd4b67d24...

I realize the \x is from the bytea output function, but the hex digits
don't match, and the gpg2 output is slightly longer than the
pgp_sym_encrypt() output.  What gpg2 options will allow it to match?
Thanks.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Matching pgp_sym_encrypt() and gpg2 output

2019-08-27 Thread Bruce Momjian
On Tue, Aug 27, 2019 at 02:05:28PM -0400, Jeff Janes wrote:
> On Tue, Aug 27, 2019 at 1:33 PM Bruce Momjian  wrote:
> 
> I am trying to generate output from the command-line program gpg2 that
> matches the output of pgp_sym_encrypt().  gpg2 outputs:
> 
>         $ echo 'my access password' | tr -d '\n' | gpg2 --symmetric 
> --batch
>         > --cipher-algo AES256 --passphrase 'abc' | xxd -p | tr -d '\n'
> 
> 
> I don't even get the same output on repeated execution of this same command, 
> so
> I think you are pursuing a lost cause.

Ah, very good point.  I at least get the same first few bytes and same
length each time, so I never even looked at the trailing bytes, and you
are right that it isn't surprising they differ each time.

Anyway, I figured it out.  I was originally trying to use openssl to
match pgp_sym_encrypt(), and that wasn't working, and then when I
couldn't get the gpg2 bytes to match, I asked here.  Seems I got it
working with the attached SQL file.  Thanks.
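
In case it helps anyone searching the archives later: because the salt
and session key are random, the right interoperability test is
decryption rather than byte comparison, e.g. round-tripping pgcrypto's
own output:

    SELECT pgp_sym_decrypt(
               pgp_sym_encrypt('my access password', 'abc', 'cipher-algo=aes256'),
               'abc');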

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


key-pass-test.sql
Description: application/sql


Re: pg_upgrade (Checking for reg* data types)

2019-09-30 Thread Bruce Momjian
On Mon, Sep 30, 2019 at 11:15:47AM +0200, Gerrit Fouche wrote:
> 
> Hi All,
> 
> I am trying to upgrade Postgresql 11.5 to 12 RC1, OS Centos 6. Running
> pg_upgrade with --check it fails with "Checking for reg* data types in user
> tables" The table listed are: pg_ts_dict,pg_ts_parser. Trying to drop the
> tables I get msg " permission denied: "pg_ts_dict" is a system catalog" I also
> for same reason can not remove the oids "ALTER TABLE pg_ts_parser set without
> oids;"
> 
> From pg_catalog.pg_extension the following are listed:
> plpgsql,pg_stat_statements,tablefunc. So the Text search extension is not in
> use.
> 
> This database was upgrade since Postgresql 8.3 by using "pg_dumpall -p 5432 |
> psql -d postgres -p 5433" and not pg_upgrade.

Wow, 8.3 --- that is old.  Please do psql \d on those two tables and
show us the output.  System tables are created by pg_upgrade as fresh,
not copied from the old cluster, so I am confused how you have reg*
entries in there.

Also, I wonder if there are pg_ts_dict and pg_ts_parser tables that are
not in pg_catalog but in some other schema, and those are what
pg_upgrade is complaining about.  (The pg_upgrade query specifically skips checking
pg_catalog tables.)
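
A quick way to check for that (just a sketch) is:

    SELECT n.nspname, c.relname
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relname IN ('pg_ts_dict', 'pg_ts_parser')
      AND n.nspname <> 'pg_catalog';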

I think maybe pg_upgrade should always output the schema name for such
objects --- I think someone proposed a patch for that recently.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: pg_upgrade (Checking for reg* data types)

2019-09-30 Thread Bruce Momjian
On Mon, Sep 30, 2019 at 11:15:47AM +0200, Gerrit Fouche wrote:
> 
> Hi All,
> 
> I am trying to upgrade Postgresql 11.5 to 12 RC1, OS Centos 6. Running
> pg_upgrade with --check it fails with "Checking for reg* data types in user
> tables" The table listed are: pg_ts_dict,pg_ts_parser. Trying to drop the
> tables I get msg " permission denied: "pg_ts_dict" is a system catalog" I also
> for same reason can not remove the oids "ALTER TABLE pg_ts_parser set without
> oids;"

PG 12 outputs the schema.table.column names, separated by dots.  I need
to see the exact error output.  This is the check query:

"SELECT n.nspname, c.relname, a.attname "
"FROM   pg_catalog.pg_class c, "
"   pg_catalog.pg_namespace n, "
"   pg_catalog.pg_attribute a, "
"   pg_catalog.pg_type t "
"WHERE  c.oid = a.attrelid AND "
"   NOT a.attisdropped AND "
"   a.atttypid = t.oid AND "
"   t.typnamespace = "
"   (SELECT oid FROM pg_namespace "
"WHERE nspname = 'pg_catalog') AND"
"   t.typname IN ( "
/* regclass.oid is preserved, so 'regclass' is OK */
"   'regconfig', "
"   'regdictionary', "
"   'regnamespace', "
"   'regoper', "
"   'regoperator', "
"   'regproc', "
"   'regprocedure' "
    /* regrole.oid is preserved, so 'regrole' is OK */
/* regtype.oid is preserved, so 'regtype' is OK */
"   ) AND "
"   c.relnamespace = n.oid AND "
"   n.nspname NOT IN ('pg_catalog', 
'information_schema')");


-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Clarification on the release notes of postgresql 12 regarding pg_upgrade

2019-10-04 Thread Bruce Momjian
On Fri, Oct  4, 2019 at 09:43:31AM -0700, Peter Geoghegan wrote:
> On Fri, Oct 4, 2019 at 9:09 AM Tom Lane  wrote:
> > > You can't REINDEX safely regarding that note.
> >
> > Actually running into that problem is quite unlikely; and if you did
> > hit it, it'd just mean that the REINDEX fails, not that you have any
> > urgent problem to fix.  I'd encourage you to just go ahead and REINDEX,
> > if you have indexes that could benefit from the other changes.
> 
> Right. It is hard to imagine an application that evolved to fully rely
> on the previous slightly higher limit, and cannot tolerate a reduction
> in the limit by only 8 bytes. The limit applies to a tuple *after*
> TOAST compression has been applied.

Right.  Pg_upgrade is fast, but we don't want it limiting file format
changes that can improve Postgres.  Allowing REINDEX to fix things is
the best of both worlds --- fast upgrades, and after some REINDEX-ing,
faster Postgres.
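
For example, rebuilding a single index or all of a table's indexes
after the upgrade is just (generic commands; the names are
placeholders):

    REINDEX INDEX my_index;
    REINDEX TABLE my_table;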

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Event Triggers and Dropping Objects

2019-10-07 Thread Bruce Momjian
On Sat, Oct  5, 2019 at 10:50:14AM +0200, Luca Ferrari wrote:
> On Fri, Oct 4, 2019 at 10:38 PM Miles Elam  wrote:
> >
> > The event trigger firing matrix lists tags like DROP TABLE and DROP 
> > FUNCTION are listed below the ddl_command_end event, but when I created a 
> > basic audit table and event trigger, they don't seem to fire. I know 
> > sql_drop exists, but again the matrix lists DROP commands in the 
> > ddl_command_end event.
> 
> Yes, I think this is a little misleading:
> <https://www.postgresql.org/docs/11/event-trigger-matrix.html>.
> The ddl_command_end is issued, and the function is invoked, but
> pg_event_trigger_ddl_commands() returns NULL on such invocation
> because sql_drop is the event with the attached data.

Do the Postgres docs need improvement here?
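
For reference, the approach Luca describes --- attaching the function to
sql_drop and reading pg_event_trigger_dropped_objects() --- looks
roughly like this (a minimal sketch):

    CREATE FUNCTION log_drops() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
            RAISE NOTICE 'dropped % %', r.object_type, r.object_identity;
        END LOOP;
    END $$;

    CREATE EVENT TRIGGER log_drops_trg ON sql_drop
        EXECUTE PROCEDURE log_drops();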

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: logging proxy

2019-11-07 Thread Bruce Momjian
On Fri, Nov 1, 2019 at 01:58:10AM +0300, Олег Самойлов wrote:
> Does anyone know PostgresQL proxy which can log queries with username,
> ip and affected rows for security reason. PostgresQL itself can log
> almost all, except affected rows.

You can use the server logs to get the non-row information, then use
streaming replication with logical decoding to get the rows for each
transaction started by the user.
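
A rough sketch of the pieces involved (assumes wal_level = logical and
a free replication slot; the slot name is made up):

    -- log user, database and client IP with each statement
    ALTER SYSTEM SET log_line_prefix = '%m [%p] user=%u db=%d host=%h ';
    SELECT pg_reload_conf();

    -- capture the actual row changes per transaction
    SELECT pg_create_logical_replication_slot('audit_slot', 'test_decoding');
    SELECT * FROM pg_logical_slot_get_changes('audit_slot', NULL, NULL);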

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: pgpool High Availability Issue

2019-12-23 Thread Bruce Momjian
The pgpool email lists are the right place to ask this question:

https://www.pgpool.net/mediawiki/index.php/Mailing_lists

---

On Fri, Nov 15, 2019 at 11:04:22AM -0800, a venkatesh wrote:
> Hi, 
> 
> I'm working on configuring high availability for pgpool using watchdog.
> Initially, I tried with two pgpool nodes (along with a pgmaster and pgslave).
> In this scenario, assuming pgpool node 1 was started first and became the
> leader. After sometime , the node got disconnected with  pgpool node 2 and
> pgpool node 2 as well declared itself as leader.                    
> 
> To handle this kind of scenario, I tried provisioning an additional pgpool 
> node
> and made a cluster with total 5 nodes (3 pgpool nodes, 1 pgmaster and 1
> pgslave), assuming it will create a quorum to handle such situations.
> Unfortunately, the situation still remains the same. (In case of any
> disconnection between node that became leader and the first stand by node, 
> both
> the nodes try to manage the pgmaster and slave simultaneously).
> 
> Please help me understand if this is expected behavior or some additional
> configurations are required to be made, so that two pgpool nodes don't become
> leader simultaneously.  If it's an expected behavior, how can we handle this 
> ? 
> 
> (A point to note is that I'm not using elastic IP address here, instead I have
> created a network load balancer in AWS, created a target group with all the
> three pgpool nodes as targets). 
> 
> Regards, 
> Venkatesh. 

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Date created for tables

2019-12-23 Thread Bruce Momjian
On Thu, Dec  5, 2019 at 05:10:20PM +, Chloe Dives wrote:
> Having moved to PostgreSQL from Oracle a few years ago I have been generally
> very impressed by Postgres, but there are a few things that I still miss. One
> of those is being able to see the created and last modified dates for database
> objects.
> 
>  
> 
> Is this something that has been considered for implementation?

I wrote a blog about this:

https://momjian.us/main/blogs/pgblog/2017.html#November_21_2017
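
One common workaround, though not necessarily what the blog recommends,
is to look at the timestamps of the relation's on-disk file, keeping in
mind they reflect the last rewrite (VACUUM FULL, CLUSTER, TRUNCATE),
not necessarily CREATE TABLE:

    -- requires appropriate privileges; 'mytable' is a placeholder
    SELECT (pg_stat_file(pg_relation_filepath('mytable'))).modification;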

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: calculating the MD5 hash of role passwords in C

2020-01-23 Thread Bruce Momjian
On Thu, Jan 23, 2020 at 05:15:37PM +0100, Christoph Moench-Tegeder wrote:
> ## Matthias Apitz (g...@unixarea.de):
> 
> > > The documentation on pg_authid has the details:
> > > "The MD5 hash will be of the user's password concatenated to their user 
> > > name."
> > > https://www.postgresql.org/docs/12/catalog-pg-authid.html
> > 
> > This is still not exactly what I was looking for. But has an interesting
> > detail (salting the role password by adding the role name to it). An
> > implementation with UNIX crypt(3) for MD5 would need an additional salt
> > like '$1$salt' to encrypt 'sisis123sisis'.
> 
> It's not crypt(3). It's "the MD5 hash of the user's password concatenated
> to their user name".
> Try:
> perl -MDigest::MD5 -e 'print("md5" . Digest::MD5::md5_hex("sisis123" . 
> "sisis") . "\n");'

FYI, this is documented:

https://www.postgresql.org/docs/12/protocol-flow.html#id-1.10.5.7.3
AuthenticationMD5Password

The frontend must now send a PasswordMessage containing the password
(with user name) encrypted via MD5, then encrypted again using the
4-byte random salt specified in the AuthenticationMD5Password message.
If this is the correct password, the server responds with an
AuthenticationOk, otherwise it responds with an ErrorResponse. The
actual PasswordMessage can be computed in SQL as concat('md5',
--> md5(concat(md5(concat(password, username)), random-salt))). (Keep in
mind the md5() function returns its result as a hex string.)
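
The stored pg_authid value for the example upthread can also be
computed directly in SQL, matching the perl one-liner above (a quick
illustration):

    SELECT 'md5' || md5('sisis123' || 'sisis');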

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-25 Thread Bruce Momjian
On Wed, Mar 25, 2020 at 05:46:27PM +0200, Marius Andreiana wrote:
> Thanks Tom, that makes sense. Appreciate your time to explain the context.
> 
> I'll followup with Heroku.

Also, I have heard PL/V8 is very hard to build for packagers (because of
changes by Google in the way V8 is packaged), which has decreased PL/V8
adoption.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-26 Thread Bruce Momjian
On Thu, Mar 26, 2020 at 10:07:48AM +0300, Ivan Panchenko wrote:
> 
> On 26.03.2020 03:50, Bruce Momjian wrote:
> > On Wed, Mar 25, 2020 at 05:46:27PM +0200, Marius Andreiana wrote:
> > > Thanks Tom, that makes sense. Appreciate your time to explain the context.
> > > 
> > > I'll followup with Heroku.
> > Also, I have heard PL/V8 is very hard to build for packagers (because of
> > changes by Google in the way V8 is packaged), which has decreased PL/V8
> > adoption.
> 
> True.
> 
> Btw, Nginx made their own JS implementation
> 
> https://nginx.org/en/docs/njs/index.html
> 
> It might be easier in integration than V8.

Oh, that sounds promising.  :-)

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Using of --data-checksums

2020-04-10 Thread Bruce Momjian
On Wed, Apr  8, 2020 at 11:54:34AM -0400, Stephen Frost wrote:
> Greetings,
> 
> * BGoebel (b.goe...@prisma-computer.de) wrote:
> > initdb --data-checksums "... help to detect corruption by the I/O system"
> > There is an (negligible?) impact on performance, ok. 
> >  
> > Is there another reason NOT to use this feature ?
> 
> Not in my view.
> 
> > Has anyone had good or bad experience with the use of  --data-checksums?
> 
> Have had good experience with it.  We should really make it the default
> already.

Yeah, but I think we wanted more ability to change an existing cluster
before doing that since it would affect pg_upgraded servers.
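
For anyone checking an existing cluster, whether it was initialized
with checksums is visible from SQL:

    SHOW data_checksums;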

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Checking for Presence of Required Libraries Fails during PostgreSQL Upgrade

2020-05-01 Thread Bruce Momjian
On Fri, May  1, 2020 at 03:03:36PM -0700, TalGloz wrote:
> I'm trying to upgrade my PostgreSQL form 10.2 to 11.7 (and eventually to
> 12.x), when I run:

FYI, you can go from PG 10 to 12 directly, without doing PG 11.

---


> 
>  /usr/pgsql-11/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin/
> --new-bindir=/usr/pgsql-11/bin/ --old-datadir=/var/lib/pgsql/10/data
> --new-datadir=/var/lib/pgsql/11/data --check
> 
> I get:
> 
> Checking for presence of required libraries   fatal
> Your installation references loadable libraries that are missing from the
> new installation.  You can add these libraries to the new installation,
> or remove the functions using them from the old installation.  A list of
> problem libraries is in the file:
> loadable_libraries.txt" 
> 
> my loadable_libraries.txt file shows:
> 
> could not load library "myfunc": ERROR:  could not access file "myfunc": No
> such file or directory
> could not load library "randomness": ERROR:  could not access file
> "randomness": No such file or directory
> could not load library "seal_diff_cpp": ERROR:  could not access file
> "seal_diff_cpp": No such file or directory
> could not load library "seal_mean_cxx": ERROR:  could not access file
> "seal_mean_cxx": No such file or directory
> could not load library "seal_mean_cxx_v2": ERROR:  could not access file
> "seal_mean_cxx_v2": No such file or directory
> could not load library "seal_variance_cxx": ERROR:  could not access file
> "seal_variance_cxx": No such file or directory
> could not load library "seal_diff_benchmark": ERROR:  could not access file
> "seal_diff_benchmark": No such file or Directory
> 
> Running:
> SELECT proname, probin, pronamespace 
> FROM pg_proc 
> WHERE probin IS NOT NULL AND pronamespace = 2200
> 
> Results in:
> 
> | proname  | probin   | pronamespace
> | bytea_size| randomness  | 2200 
> | sum_of_numbers  | myfunc | 2200 
> | seal_diff_cpp| seal_diff_cpp | 2200 
> | seal_mean_cxx_v2| seal_mean_cxx_v2| 2200 
> | seal_variance_cxx | seal_variance_cxx | 2200 
> | seal_diff_benchmark | seal_diff_benchmark  | 2200 
> | seal_mean_cxx | seal_mean_cxx | 2200 
> 
> All the 7 problematic libraries (.so) are manually created c extension
> functions. I can't just move them to /usr/pgsql-11/lib/ because they we
> compiled with PostgresSQL10 and not 11.
> 
> How can I resolve this issue? How can I delete them properly if porting them
> to 11 is a painful process? Deleting them directly form /usr/pgsql-10/lib/
> doesn't help.
> 
> 
> 
> --
> Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
> 
> 

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: GPG signing

2020-06-03 Thread Bruce Momjian
On Wed, May 27, 2020 at 10:14:46AM +1000, Tim Cross wrote:
> 
> Marc Munro  writes:
> 
> > I need to be able to cryptographically sign objects in my database
> > using a public key scheme.
> >
> > Is my only option to install plpython or some such thing?   Python
> > generally makes me unhappy as I'm never convinced of its stability or
> > the quality of its APIs, and it is not obvious to me which of the many
> > different gpg-ish packages I should choose.
> >
> > Any other options?  Am I missing something?
> >
> 
> This is something you would normally implement at the application level,
> using the database as just the store for the data and signatures or
> signed digests.
> 
> Main reason for this is to allow for secure key management. It is very
> difficult to implement a secure key management solution at the database
> level unless it is designed into the fundamental architecture of the
> rdbms. It is the same age old problem - how can you encrypt data AND
> have the keys for the encrypted data in the same place. The main reason
> for encryption is so that if your store gets compromised, the data
> cannot be read. However, if your key is also in the store, then when
> your compromised, your key is compromised and your encryption becomes a
> mute issue.

This blog entry illustrates row signing on the client side:

https://momjian.us/main/blogs/pgblog/2018.html#September_7_2018
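
The general shape is that the signature is computed and verified on the
client with keys the database never sees, and the database just stores
it alongside the row (a hypothetical schema, not the blog's exact one):

    CREATE TABLE signed_rows (
        id        serial PRIMARY KEY,
        payload   text  NOT NULL,
        signature bytea NOT NULL  -- detached signature produced by the client
    );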

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Oracle vs. PostgreSQL - a comment

2020-06-03 Thread Bruce Momjian
On Sun, May 31, 2020 at 11:26:07PM +1000, Tim Cross wrote:
> Yes, even after longer time doing Oracle, I still never felt as
> comfortable or across things as much as I do with PG. Started with
> Oracle 7 and stayed until 11g and each year, it got worse rather than better.
> 
> After working as a DBA, I know exactly what you mean. Sometimes, DBA has
> to equal "Don't Bother Asking". 
> 
> As a developer, I have to admit being somewhat embarrassed by the
> frequently poor understanding amongst many developers regarding the
> technology they are using. I've never understood this. I come across
> developers all the time who are completely clueless once outside their
> IDE or editor. Too often, they have little understanding of the hosting
> environment, the base protocols they are using, the RDBMS or even basic
> SQL. I don't understand how you can develop anything of quality if you
> don't have a thorough understanding of all the technology involved.
> 
> I'm  probably just a dinosaur - I also prefer VI and Emacs as my primary
> development environments and will use psql and sqlplus before Taod,
> pgAdmin, sqlDeveloper etc. 

In my many years as a DBA/developer, I have learned that understanding
the low-level stuff, even down to the CPU behavior, allows debugging of
problems much more efficiently, to the point where it looks like magic
when you can quickly point out the problem.  Also, the low-level stuff
rarely changes, so once you understand it, you can use it forever.  The
big problem is getting people to see the value in learning that stuff
when they don't have an immediate need --- curiosity helps  with
motivation.  :-)

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_dump of database with numerous objects

2020-06-13 Thread Bruce Momjian
On Wed, Jun  3, 2020 at 04:10:55PM -0400, Alvaro Herrera wrote:
> On 2020-May-31, t...@exquisiteimages.com wrote:
> 
> > I am now needing to upgrade to a new version of PostgreSQL and I am running
> > into problems when pg_upgrade calls pg_dump. pg_dump stalled at: "pg_dump:
> > saving database definition" for 24 hours before I killed the process.
> > 
> > My pg_class table contains 9,000,000 entries and I have 9004 schema.
> 
> We've made a number of performance improvements to pg_dump so that it
> can dump databases that are "large" in several different dimensions, but
> evidently from your report it is not yet good enough when it comes to
> dumping millions of tables in thousands of schemas.  It will probably
> take some profiling of pg_dump to figure out where the bottleneck is,
> and some careful optimization work in order to make it faster.  Not a
> weekend job, I'm afraid :-(

FYI, we never actually found what version of pg_dump was being used,
since pg_upgrade uses the pg_dump version in the newer cluster.  We only
know the user is coming _from_ 9.3.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_dump of database with numerous objects

2020-06-13 Thread Bruce Momjian
On Sun, May 31, 2020 at 02:02:08PM -0700, Adrian Klaver wrote:
> On 5/31/20 1:38 PM, Christophe Pettus wrote:
> > 
> > 
> > > On May 31, 2020, at 13:37, Adrian Klaver  
> > > wrote:
> > > 
> > > Just a reminder that the OP's original issue was with using pg_upgrade.
> > 
> > True, although IIRC pg_ugprade uses pg_dump under the hood to do the schema 
> > migration.
> 
> Again true, but pg_upgrade will not work older to newer:
> 
> /usr/local/pgsql11/bin/pg_upgrade  --check -U postgres -d
> /usr/local/pgsql11/data -D /usr/local/pgsql12_up/data -b
> /usr/local/pgsql11/bin -B /usr/local/pgsql12/bin
> 
> Performing Consistency Checks
> -
> Checking cluster versions
> This utility can only upgrade to PostgreSQL version 11.
> Failure, exiting

This is saying you can only use pg_upgrade 11.X to upgrade _to_ Postgres
11.X.  If you want to upgrade to 12, you have to use pg_upgrade from 12.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Bruce Momjian
On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:
> I agree these are all technical issues, but nevertheless - "implementation
> details", which DBAs don't care about. What's important from a DBA's
> perspective is not whether WAL is cluster-wide or database-wide, but whether
> it's possible to manage backups/PITR/restores of individual databases in a 
> more
> convenient matter, which other RDBMS-vendors seem to provide.
>  
> I love PG, have been using it professionally since 6.5, and our company 
> depends
> on it, but there are things other RDBMS-vendors do better...

The bigger issue is that while we _could_ do this, it would add more
problems and complexity, and ultimately, I think would make the
software less usable overall and would be a net-negative.  We know of no
way to do it without a ton of negatives.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Bruce Momjian
On Tue, Jun  2, 2020 at 03:45:08PM -0400, Ravi Krishna wrote:
> 
> > 
> > Generally speaking, I discourage having lots of databases under one PG
> > cluster for exactly these kinds of reasons.  PG's individual clusters
> > are relatively lightweight, after all.
> > 
> 
> Plus PG does not directly support cross database queries using 3 part name, 
> something
> sqlserver excels at.

We consider the lack of this ability to be a security benefit. 
Cross-container queries can be done using schemas.
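
A minimal sketch of the schema approach, with made-up schema and table
names:

    CREATE SCHEMA tenant_a;
    CREATE SCHEMA tenant_b;
    CREATE TABLE tenant_a.orders (id int, total numeric);
    CREATE TABLE tenant_b.orders (id int, total numeric);

    -- a "cross-container" join inside one database, using two-part names
    SELECT a.id, a.total, b.total
      FROM tenant_a.orders a
      JOIN tenant_b.orders b USING (id);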

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Bruce Momjian
On Sat, Jun 13, 2020 at 05:06:37PM -0500, Ron wrote:
> On 6/13/20 1:46 PM, Bruce Momjian wrote:
> > On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:
> > > I agree these are all technical issues, but nevertheless - "implementation
> > > details", which DBAs don't care about. What's important from a DBA's
> > > perspective is not whether WAL is cluster-wide or database-wide, but 
> > > whether
> > > it's possible to manage backups/PITR/restores of individual databases in 
> > > a more
> > > convenient matter, which other RDBMS-vendors seem to provide.
> > > I love PG, have been using it professionally since 6.5, and our company 
> > > depends
> > > on it, but there are things other RDBMS-vendors do better...
> > The bigger issue is that while we _could_ do this, it would add more
> > problems and complexity, and ultimately, I think would make the
> > software less usable overall and would be a net-negative.  We know of no
> > way to do it without a ton of negatives.
> 
> How do other RDBMSs do it with ease?  (I know it's an architectural issue,
> but what's the architectural issue?)

I don't know.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Oracle vs. PostgreSQL - a comment

2020-06-14 Thread Bruce Momjian
On Sat, Jun 13, 2020 at 09:27:25PM -0400, Bruce Momjian wrote:
> On Sat, Jun 13, 2020 at 05:06:37PM -0500, Ron wrote:
> > On 6/13/20 1:46 PM, Bruce Momjian wrote:
> > > On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:
> > > > I agree these are all technical issues, but nevertheless - 
> > > > "implementation
> > > > details", which DBAs don't care about. What's important from a DBA's
> > > > perspective is not whether WAL is cluster-wide or database-wide, but 
> > > > whether
> > > > it's possible to manage backups/PITR/restores of individual databases 
> > > > in a more
> > > > convenient matter, which other RDBMS-vendors seem to provide.
> > > > I love PG, have been using it professionally since 6.5, and our company 
> > > > depends
> > > > on it, but there are things other RDBMS-vendors do better...
> > > The bigger issue is that while we _could_ do this, it would add more
> > > problems and complexity, and ultimately, I think would make the
> > > software less usable overall and would be a net-negative.  We know of no
> > > way to do it without a ton of negatives.
> > 
> > How do other RDBMSs do it with ease?  (I know it's an architectural issue,
> > but what's the architectural issue?)
> 
> I don't know.

I don't know the details, but I do know the general issues.  Other
vendors must have sacrificed architectural simplicity, features,
reliability, or performance to allow these things.  For example, it
wouldn't be hard to just make databases another level of container above
schemas to allow for simple cross-database queries, but we would lose
the security isolation of databases (connection control, private system
tables, and extensions) to do that.  Having per-database WAL causes loss
of performance, reliability issues, and architectural complexity.  Those
problems might be solvable, but you will need to take a hit in one of
these areas.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Index no longer being used, destroying and recreating it restores use.

2020-06-15 Thread Bruce Momjian
On Tue, Jun  9, 2020 at 02:23:51PM +0200, Koen De Groote wrote:
> Right. In that case, the function I ended up with is this:
> 
> create or replace function still_needs_backup(bool, bool)
> returns BOOLEAN as $$
> BEGIN                
>     PERFORM 1 from item where shouldbebackedup=$1 and backupperformed=$2;
>       IF FOUND THEN
>         RETURN TRUE;
>       ELSE
>         RETURN FALSE;
>       END IF;
> END;
> $$
> language plpgsql;
> 
> 
> And the index for that should be this, I think: "CREATE INDEX CONCURRENTLY
> index_test ON item USING btree (still_needs_backup, itemCreated, filepath)
> WHERE still_needs_backup(true, false) = true;"
> However postgres throws an error here, saying "ERROR:  functions in index
> predicate must be marked IMMUTABLE".
> 
> I tried it also without the first argument, same error.
> 
> And I don't think I can do that, because the return is not IMMUTABLE. It is at
> best STABLE, but certainly not IMMUTABLE.
> 
> So yeah, I'm probably not understanding the suggestion properly? Either way, I
> still have questions about the earlier function I created, namely how reliable
> that performance is. If not the same thing will happen as with the re-created
> index.

I think you need to look at EXPLAIN ANALYZE and see how close the
estimate is to the actual counts for the various stages.  The original
query had these quite different, leading to misestimation and wrong
plans.  If the new EXPLAIN ANALYZE has estimates closer to actual, the
problem should not reappear.
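
For example, reusing the column names from this thread (a sketch, not the
exact query):

    EXPLAIN ANALYZE
    SELECT * FROM item
    WHERE shouldbebackedup = true AND backupperformed = false;

    -- compare each node's estimated "rows=" with its "actual ... rows=";
    -- large differences point to stale or insufficient statistics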

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Move configuration files with pg_upgrade

2020-06-15 Thread Bruce Momjian
On Thu, Jun  4, 2020 at 03:26:07PM -0700, Adrian Klaver wrote:
> On 6/4/20 12:52 PM, Tanja Savic wrote:
> > Hello,
> > 
> > I did Postgresql upgrade from v10 to v12 using pg_upgrade (on Linux
> > Ubuntu server).
> > Of course i wanted settings inpg_hba.conf and postgresql.conf to keep,
> > but after the upgrade there were new configuration files and I moved it
> > manually.
> 
> That is documented:
> 
> https://www.postgresql.org/docs/12/pgupgrade.html
> 
> "
> Restore pg_hba.conf
> 
> If you modified pg_hba.conf, restore its original settings. It might also be
> necessary to adjust other configuration files in the new cluster to match
> the old cluster, e.g. postgresql.conf.
> "
> 
> The new version's conf files  will probably have new settings available so
> this is something that should be reviewed before moving over. One way to
> deal with this is include file(s):
> 
> https://www.postgresql.org/docs/12/config-setting.html#CONFIG-INCLUDES
> 
> Using them you create the settings you want to manage in the include file.
> Then copy that over and set the include 'some_file' in the new
> postgresql.conf and you are good.

Yes, the community instructions require you to reconfigure the new
server to match the old one.  Some packagers who automate pg_upgrade
might do that configuration migration automatically.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Should I enforce ssl/local socket use?

2020-06-15 Thread Bruce Momjian
On Sun, Jun  7, 2020 at 10:32:39AM +1000, Tim Cross wrote:
> 
> Michel Pelletier  writes:
> 
> > Hello,
> >
> > I'm the author of the pgsodium cryptography library.  I have a question
> > about a best practice I'm thinking of enforcing.  Several functions in
> > pgsodium generate secrets, I want to check the Proc info to enforce that
> > those functions can only be called using a local domain socket or an ssl
> > connection.  If the connection isn't secure by that definition, secret
> > generating functions will fail.
> >
> > If someone really wants to point the gun at their foot, they can connect
> > with an unsecured proxy.  My goal would be to make bypassing the check
> > annoying.
> >
> > Any thoughts?  Is this an insufferably rude attitude?  Are there scenarios
> > where one can foresee needing to generate secrets not over ssl or a domain
> > socket?
> >
> 
> I'm never very fond of enforcing a particular behaviour as it assumes we
> understand all environments and use cases. Far better to make this the
> default behaviour, but allow users to disable it if they want and
> clearly document that option as insecure. I also suspect that without
> the ability to somehow disable the checks, people will find elaborate
> ways to work around them which are almost certainly going to be even
> worse from a security perspective. 

You also have to allow a way to disable it that is secure or it is
useless, which makes it even more complex.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Index no longer being used, destroying and recreating it restores use.

2020-06-16 Thread Bruce Momjian
On Tue, Jun 16, 2020 at 11:49:15AM +0200, Koen De Groote wrote:
> Alright, I've done that, and that seems to be a very good result: https://
> explain.depesz.com/s/xIph
> 
> The method I ended up using:
> 
> create or replace function still_needs_backup(shouldbebackedup bool,
> backupperformed bool)
> returns BOOLEAN as $$
>    select $1 AND NOT $2;
> $$
> language sql immutable;
> 
> And the index is as suggested.
> 
> It seems the amount of rows we end up with has improved.
> 
> Thank you for your help. I wasn't aware functions could interact with indexes
> in such a manner.

This blog entry explains how statistics on expression/function indexes
can help:

https://momjian.us/main/blogs/pgblog/2017.html#February_20_2017
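
As a generic sketch of the idea (the table and index names here are made
up, not this thread's):

    CREATE TABLE people (id int, name text);
    CREATE INDEX people_lower_name_idx ON people (lower(name));
    ANALYZE people;   -- gathers statistics on the indexed expression

    -- the expression's statistics appear under the index name:
    SELECT tablename, attname, n_distinct
      FROM pg_stats
     WHERE tablename = 'people_lower_name_idx';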

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: i am getting issue as FATAL: password authentication failed for user ' postgres' tried

2020-06-23 Thread Bruce Momjian
On Sat, Jun 20, 2020 at 06:28:52PM +0200, Peter J. Holzer wrote:
> On 2020-06-17 20:00:51 -0700, Adrian Klaver wrote:
> > On 6/17/20 7:14 PM, prachi surangalikar wrote:
> > > i have  tried every  thing  but still i  could not find the solution to
> > > this problem.
> > > i made changes in the pg_hba.conf file also , please help me to solve
> > > this problem.
> > 
> > Should have added to previous post:
> > 
> > Are you sure that you are using the correct password or that the 'postgres'
> > user has a password?
> 
> And that the OP is indeed using the 'postgres' user and not the ' postgres'
> user (as she wrote in the subject).

Uh, how are those different?

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: PostGreSQL TDE encryption patch

2020-06-25 Thread Bruce Momjian
On Thu, Jun 25, 2020 at 04:20:06PM +0530, Bhalodiya, Chirag wrote:
> Hi Patrick,
> 
> Thanks for the information. I was looking for out of box postgre solution so
> wanted to know how to apply following patch on top of my postgre 12
> installation:
> https://www.postgresql.org/message-id/
> CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com  

The patch is for developers and not for production use.

---


> 
> 
> Regards,
> Chirag.  
> 
> On Thu, Jun 25, 2020 at 3:33 PM Patrick FICHE 
> wrote:
> 
> 
> Hi
> 
>  
> 
> CYBERTEC provided good installation guide (https://
> www.cybertec-postgresql.com/en/products/
> postgresql-transparent-data-encryption/).
> 
>  
> 
> Here is their answer to your question :
> 
> Q: Can I upgrade to an encrypted database?
> A: In place encryption of existing clusters is currently not supported. A
> dump and reload to an encrypted instance is required, or logical
> replication can be used to perform the migration online.
> 
>  
> 
> Regards,
> 
>  
> 
> Patrick Fiche
> 
> Database Engineer, Aqsacom Sas.
> 
> c. 33 6 82 80 69 96
> 
>  
> 
> 
>  
> 
> From: Bhalodiya, Chirag 
> Sent: Thursday, June 25, 2020 9:50 AM
> To: pgsql-gene...@postgresql.org
> Subject: PostGreSQL TDE encryption patch
> 
>  
> 
> Hi,
> 
>  
> 
> We are migrating our product to PostGreSQL from Oracle and as part of 
> HIPPA
> (https://en.wikipedia.org/wiki/
> Health_Insurance_Portability_and_Accountability_Act) guidelines, we have a
> requirement to encrypt entire tablespace/specific tables using Transparent
> data encryption(TDE).
> 
>  
> 
> I was looking at TDE solution in PostGreSQL and went through following
> wiki:
> 
> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption 
> 
>  
> 
> I found following TDE patch from this wiki:  
> 
> https://www.postgresql.org/message-id/
> CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com
>   
> 
>  
> 
> However, I am not sure how to apply this patch and I had the
> following questions:
> 
> 1. We are using PostGreSQL 12. Is it possible to apply patches on top of
> existing PostGreSQL installation?
> 
> 2. Will it be available anytime sooner with a major release like 
> PostGreSQL
> 13? 
> 
>  
> 
> Regards,
> 
> Chirag.
> 



-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: PostgreSQL 12 - ERROR: could not rename file "pg_logical/snapshots/1A-7C00D890.snap.13594.tmp" to "pg_logical/snapshots/1A-7C00D890.snap": No space left on device

2020-06-30 Thread Bruce Momjian
On Tue, Jun 30, 2020 at 01:16:58PM -0700, Adrian Klaver wrote:
> On 6/30/20 11:03 AM, FOUTE K. Jaurès wrote:
> > Hi everyone,
> > 
> > I am use a native logical replication in PostgreSQL 12 on Ubuntu 16.04
> > and all is working fine until i faced this error below.
> 
> From below it looks like you are using pg_logical which is a third party
> package. It is what the builtin logical replication is derived from, but it
> is not the same thing. So what version of pg_logical are you using?
> 
> > 
> > ERROR:  could not rename file
> > "pg_logical/snapshots/1A-7C00D890.snap.13594.tmp" to
> > "pg_logical/snapshots/1A-7C00D890.snap": No space left on device
> > 
> > Is there someone who faced this problem?
> > Any idea how I can solve it ?
> > 
> > BTW: I don't have any problem with space.
> 
> The error says you do.
> Where is pg_logical/snapshots/ mounted?
> Are there specific restrictions on that mount?

I would also look at your kernel log.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: PostgreSQL 12 - ERROR: could not rename file "pg_logical/snapshots/1A-7C00D890.snap.13594.tmp" to "pg_logical/snapshots/1A-7C00D890.snap": No space left on device

2020-06-30 Thread Bruce Momjian
On Tue, Jun 30, 2020 at 10:03:52PM +0100, FOUTE K. Jaurès wrote:
> Le mar. 30 juin 2020 à 21:23, Bruce Momjian  a écrit :
> 
> On Tue, Jun 30, 2020 at 01:16:58PM -0700, Adrian Klaver wrote:
> > On 6/30/20 11:03 AM, FOUTE K. Jaurès wrote:
> > > Hi everyone,
> > >
> > > I am use a native logical replication in PostgreSQL 12 on Ubuntu 16.04
> > > and all is working fine until i faced this error below.
> >
> > From below it looks like you are using pg_logical which is a third party
> > package. It is what the builtin logical replication is derived from, but
> it
> > is not the same thing. So what version of pg_logical are you using?
> >
> > >
> > > ERROR:  could not rename file
> > > "pg_logical/snapshots/1A-7C00D890.snap.13594.tmp" to
> > > "pg_logical/snapshots/1A-7C00D890.snap": No space left on device
> > >
> > > Is there someone who faced this problem?
> > > Any idea how I can solve it ?
> > >
> > > BTW: I don't have any problem with space.
> >
> > The error says you do.
> > Where is pg_logical/snapshots/ mounted?
>     > Are there specific restrictions on that mount?
> 
> I would also look at your kernel log.
> 
> image.png

OK, are you using SAN/iSCSI or NAS, which uses Ethernet to communicate
with the storage?

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Transaction control in SECURITY DEFINER procedures

2020-07-22 Thread Bruce Momjian
On Tue, Jul  7, 2020 at 12:13:42PM +0100, Chris Sterritt wrote:
> The documentation for CREATE PROCEDURE informs us "A SECURITY DEFINER 
> procedure
> cannot execute transaction control statements (for example, COMMIT and 
> ROLLBACK
> , depending on the language)."
> 
> Can anyone let me know why this is so and are there any plans to remove this
> restriction in future releases?

I have a reproducible case:

CREATE OR REPLACE PROCEDURE transcheck () AS $$
BEGIN
PERFORM 1;
COMMIT;
END;
$$ LANGUAGE plpgsql;

CALL transcheck ();

ALTER PROCEDURE transcheck SECURITY DEFINER;

CALL transcheck ();
--> ERROR:  invalid transaction termination
--> CONTEXT:  PL/pgSQL function transcheck() line 4 at COMMIT

and this is the reason:

commit 3884072329 Author: Peter Eisentraut 
Date:   Wed Jul 4 09:26:19 2018 +0200

Prohibit transaction commands in security definer procedures

Starting and aborting transactions in security definer
procedures doesn't work.  StartTransaction() insists that
the security context stack is empty, so this would currently
cause a crash, and AbortTransaction() resets it.  This could
be made to work by reorganizing the code, but right now we
just prohibit it.

Reported-by: amul sul  Discussion:

https://www.postgresql.org/message-id/flat/CAAJ_b96Gupt_LFL7uNyy3c50-wbhA68NUjiK5%3DrF6_w%3Dpq_T%3DQ%40mail.gmail.com

so, yes, it is possible, but no one has implemented it.  This is the
first complaint I have heard about this.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Doubt in mvcc

2020-07-22 Thread Bruce Momjian
On Mon, Jul 13, 2020 at 10:41:28AM +0200, Francisco Olarte wrote:
> Rama:
> 
> On Mon, Jul 13, 2020 at 9:52 AM Rama Krishnan  wrote:
> > I m preparing for interview one of the recruiter asked me mvcc drawbacks as 
> > i told due to mvcc it use more space and need to perform maintenance 
> > activity.
> > Another one is the same data causes an update conflict because two 
> > different transactions can update the same version of the row.
> >  he told its wrong, kindly tell me will you please tell me its correct or 
> > wrong?
> 
> I'm not sure I understand your question too well, you may want to
> refresh/expand.
> 
> One interpretation is, on a pure MVCC contest, two transactions, say 5
> and 6, could try to update a tuple valid for [1,) and end up
> generating two new tuples, [5,), [6,) and closing the original at
> either [1,5) or [1,6) .
> 
> That's why MVCC is just a piece, locking is other. On a MVCC the
> tuples are locked while a transaction manipulates them. Other
> transactions may read them, which is why readers do not block writers,
> but two updates on the same tuple serialize.

You might want to look at this:

https://momjian.us/main/presentations/internals.html#mvcc

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_dump / pg_restore version confusion

2020-07-29 Thread Bruce Momjian
On Wed, Jul 29, 2020 at 10:53:34AM +, Zwettler Markus (OIZ) wrote:
> Hi,
> 
>  
> 
> An external supplier had a postgres v9.5 database which he dumped with a
> pg_dump v12 client in custom format using PgAdmin4.
> 
>  
> 
> Would a pg_restore with a v12 client into a postgres v9.6 database work and be
> officially supported?

Yes, you can always use a newer pg_dump on an older database, though the
reverse is not recommended.  In fact, if you are upgrading to PG 12, it
is recommended to use pg_dump v12 to dump a Postgres database from an
earlier version.
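
For example, when moving to PG 12 (host names, paths, and the database
name are placeholders):

    /usr/pgsql-12/bin/pg_dump -Fc -h old-server -d mydb -f mydb.dump
    /usr/pgsql-12/bin/createdb -h new-server mydb
    /usr/pgsql-12/bin/pg_restore -h new-server -d mydb mydb.dump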

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_dump / pg_restore version confusion

2020-07-29 Thread Bruce Momjian
On Wed, Jul 29, 2020 at 12:33:56PM +, Zwettler Markus (OIZ) wrote:
> And I can also do this restore:
> 
> 
> Would a pg_restore with a v12 client into a postgres v9.6 database work and 
> be officially supported?
> 

Uh, good question.  You should still use the version of pg_restore that
you are loading _into_, not what you dumped from.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can PAF be used to provide zero downtime while primary and backup servers are being patched?

2020-08-06 Thread Bruce Momjian
On Thu, Aug  6, 2020 at 10:01:21AM -0400, David Gauthier wrote:
> Our IT dept needs to install a patch on both primary and backup servers for 
> our
> Postgres Automatic Failover configured DB (version 9.6 on linux).  From the
> standpoint of the DB users, can a strategy be implemented such that they see
> zero downtime during this process as the 2 servers are taken down in sequence
> for their patch, switching the role of primary and backup as this happens ?

Assuming this is a minor release upgrade, you can do it by just bouncing
the servers.  You can do the switch-over/switch-back too, but you have
to manage session migration.  I wrote a blog about it:

https://momjian.us/main/blogs/pgblog/2018.html#October_1_2018

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Numeric data types

2020-08-31 Thread Bruce Momjian
On Mon, Aug 31, 2020 at 04:38:05PM +, Godfrin, Philippe E wrote:
> Greetings Listees (frist time poster!)
> 
>  
> 
> Considering the difference between 64 bit and 32 bit numeric datatypes. We 
> have
> source databases that are running 32 bit. They send their data to a larger
> cluster that is running 64 bit. Should there be something special done in 
> order
> to accommodate the difference?

How is the data sent?  In almost every case, the translation should work
fine.  I think 32-bit overflow should be your only concern here.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: [EXTERNAL] Re: Numeric data types

2020-08-31 Thread Bruce Momjian
On Mon, Aug 31, 2020 at 05:32:23PM +, Godfrin, Philippe E wrote:
> Thanks Bruce, I suppose you mean n32 -> n64 OK, but n64->n32 chance of 32-bit
> overflow…

Uh, I was thinking that your 32-bit values could roll over and the other
end would be confused, but I can also see the 64-bit going above the
32-bit range.  As long as everything stays < 32-bits, you should be
fine.  We don't transfer binary values very often.
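
If the values involved are 32-bit integers (an assumption on my part), the
limit looks like this:

    SELECT 2147483647::int4;       -- the largest 32-bit integer value
    SELECT 2147483647::int4 + 1;   -- ERROR:  integer out of range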

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: [EXTERNAL] Re: Numeric data types

2020-08-31 Thread Bruce Momjian
On Mon, Aug 31, 2020 at 10:14:48PM +, Godfrin, Philippe E wrote:
> Fabulous, thanks much.

You still have not told us how you are transferring the data, so we can
be sure.

---

> 
> From: Bruce Momjian 
> Sent: Monday, August 31, 2020 4:56 PM
> To: Godfrin, Philippe E 
> Cc: pgsql-gene...@postgresql.org
> Subject: Re: [EXTERNAL] Re: Numeric data types
> 
>  
> 
> On Mon, Aug 31, 2020 at 05:32:23PM +, Godfrin, Philippe E wrote:
> > Thanks Bruce, I suppose you mean n32 -> n64 OK, but n64->n32 chance of 
> > 32-bit
> > overflow…
> 
> Uh, I was thinking that your 32-bit values could roll over and the other
> end would be confused, but I can also see the 64-bit going above the
> 32-bit range. As long as everything stays < 32-bits, you should be
> fine. We don't transfer binary values very often.
> 
> --
> Bruce Momjian  https://momjian.us
> EnterpriseDB https://enterprisedb.com
> 
> The usefulness of a cup is in its emptiness, Bruce Lee
> 

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: [EXTERNAL] Re: Numeric data types

2020-08-31 Thread Bruce Momjian
On Mon, Aug 31, 2020 at 10:20:51PM +, Godfrin, Philippe E wrote:
> Frankly, I’m not certain, I believe the developers are using a messaging
> intermediary.

Oh, well, then you would need to find out if they are transferring the
value via text or binary.  It is kind of hard to extract query results
as binary, but it is technically possible with binary COPY or triggers.
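
A minimal sketch of binary COPY ("mytable" and the file path are
placeholders; server-side COPY to a file also needs the appropriate
privileges):

    COPY mytable TO '/tmp/mytable.bin' WITH (FORMAT binary);
    COPY mytable FROM '/tmp/mytable.bin' WITH (FORMAT binary);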

---

> 
> pg
> 
>  
> 
> From: Bruce Momjian 
> Sent: Monday, August 31, 2020 5:19 PM
> To: Godfrin, Philippe E 
> Cc: pgsql-gene...@postgresql.org
> Subject: Re: [EXTERNAL] Re: Numeric data types
> 
>  
> 
> On Mon, Aug 31, 2020 at 10:14:48PM +, Godfrin, Philippe E wrote:
> > Fabulous, thanks much.
> 
> You still have not told us how you are transfering the data, so we can
> be sure.
> 
> -------
> 
> >
> > From: Bruce Momjian 
> > Sent: Monday, August 31, 2020 4:56 PM
> > To: Godfrin, Philippe E 
> > Cc: pgsql-gene...@postgresql.org
> > Subject: Re: [EXTERNAL] Re: Numeric data types
> >
> >
> >
> > On Mon, Aug 31, 2020 at 05:32:23PM +, Godfrin, Philippe E wrote:
> > > Thanks Bruce, I suppose you mean n32 -> n64 OK, but n64->n32 chance of
> 32-bit
> > > overflow…
> >
> > Uh, I was thinking that your 32-bit values could roll over and the other
> > end would be confused, but I can also see the 64-bit going above the
> > 32-bit range. As long as everything stays < 32-bits, you should be
> > fine. We don't transfer binary values very often.
> >
> > --
> > Bruce Momjian  https://momjian.us
> > EnterpriseDB https://enterprisedb.com
> >
> > The usefulness of a cup is in its emptiness, Bruce Lee
> >
> 
> --
> Bruce Momjian  https://momjian.us
> EnterpriseDB https://enterprisedb.com
> 
> The usefulness of a cup is in its emptiness, Bruce Lee
> 

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-20 Thread Bruce Momjian
On Mon, Sep 14, 2020 at 05:39:57PM -0400, Tom Lane wrote:
> tutilu...@tutanota.com writes:
> > 1. All non-ANSI characters are turned into "?"s for application_name.
> 
> Yeah, that's hard to do much with unfortunately.  We cannot assume that
> all databases in an installation share the same encoding, so for globally
> visible strings like application_name, the only safe solution is to
> restrict them to ASCII.
> 
> On the other hand, the very same thing could be said of database names
> and role names, yet we have never worried much about whether those were
> encoding-safe when viewed from databases with different encodings, nor
> have there been many complaints about the theoretical unsafety.  So maybe
> this is just overly anal-retentive and we should drop the restriction,
> or at least pass through data that doesn't appear to be invalidly
> encoded.

I think the issue is that role and database names are controlled by
privileged users, while application_name is not.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-20 Thread Bruce Momjian
On Mon, Sep 14, 2020 at 10:22:31PM +0200, tutilu...@tutanota.com wrote:
> 4. There is no built-in means to have PG manage (or even suggest) indexes on
> its own. Trying to figure out what indexes to create/delete/fine-tune, and
> determine all the extremely complex rules for this art (yes, I just called
> index management an *art*, because it is!), is just utterly hopeless to me. It
> never gets any easier. Not even after many years. It's the by far worst part 
> of
> databases to me (combined with point five). Having to use third-party 
> solutions
> ensures that it isn't done in practice, at least for me. I don't trust, nor do
> I want to deal with, external software and extensions in my databases. I still
> have nightmares from PostGIS, which I only keep around, angrily, out of
> absolute necessity. I fundamentally don't like third-party add-ons to things,
> but want the core product to properly support things. Besides, this (adding/
> managing indexes) is not even some niche/obscure use-case, but something which
> is crucial for basically any nontrivial database of any kind!

I think you did a very good job of explaining your issues.  I think the
underlying problem is that Postgres is targeting a wide market, and your
use-case for a more limited or self-contained database doesn't fit many
of those markets.  Also, PostGIS is one of the most complex extensions,
so adding simpler ones should not be as hard.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: multiple tables got corrupted

2020-09-24 Thread Bruce Momjian
On Tue, Sep 15, 2020 at 07:58:39PM +0200, Magnus Hagander wrote:
> Try reading them "row by row" until it breaks. That is, SELECT * FROM ... 
> LIMIT
> 1, then LIMIT 2 etc. For more efficiency use a binary search starting at what
> seems like a reasonable place looking at the size of the table vs the first
> failed block to make it faster, but the principle is the same. Once it fails,
> you've found a corrupt block...

You can also include the invisible 'ctid' column so you can see the
block number of each row, e.g.:

SELECT ctid, relname FROM pg_class LIMIT 2;
  ctid  |   relname
+--
 (0,46) | pg_statistic
 (0,47) | pg_type

The format is page number, item number on page.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-24 Thread Bruce Momjian
On Sun, Sep 20, 2020 at 01:15:26PM -0400, Tom Lane wrote:
> Bruce Momjian  writes:
> > On Mon, Sep 14, 2020 at 05:39:57PM -0400, Tom Lane wrote:
> >> On the other hand, the very same thing could be said of database names
> >> and role names, yet we have never worried much about whether those were
> >> encoding-safe when viewed from databases with different encodings, nor
> >> have there been many complaints about the theoretical unsafety.  So maybe
> >> this is just overly anal-retentive and we should drop the restriction,
> >> or at least pass through data that doesn't appear to be invalidly
> >> encoded.
> 
> > I think the issue is that role and database names are controlled by
> > privileged users, while application_name is not.
> 
> That's certainly an argument against a completely laissez-faire approach,
> but if we filtered invalidly-encoded data on the reading side, it seems
> like we would be in good enough shape.

Yes, if we want to filter, sure.  I thought we were not 100% able to
filter, but I guess if it is safe, we can do it.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-24 Thread Bruce Momjian
On Wed, Sep 23, 2020 at 12:28:14AM +0200, tutilu...@tutanota.com wrote:
> Sep 21, 2020, 7:53 PM by j...@commandprompt.com:
> See my comment about Google. The information is out there and easy to 
> find.
> 
> I guess I'm the worst idiot in the world, then, who can't DuckDuckGo (Google 
> is
> evil) it even after 15 years.
> 
> Seriously, I didn't type my feedback "for fun". It may be difficult for very
> intelligent people to understand (as often is the case, because you operate on
> a whole different level), but the performance-related PostgreSQL configuration
> options are a *nightmare* to me and many others. I spent *forever* reading
> about them and couldn't make any sense of it all. Each time I tried, I would
> give up, frustrated and angry, with no real clue what "magic numbers" it
> wanted.
> 
> It's quite baffling to me how this can be so difficult for you all to
> understand. Even if we disregard the sheer intelligence factor, it's clear 
> that
> users of PG don't have the same intimate knowledge of PG's internals as the PG
> developers, nor could possibly be expected to.
> 
> As mentioned, I kept going back to the default configuration over and over
> again. Anyone who doesn't is either a genius or pretends/thinks that they
> understand it. (Or I'm extremely dumb.)

I think there is a clear assumption that people reading the docs,
particularly for performance tuning, must already have knowledge of a
lot of low-level things --- this could be the cause of your
frustration.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-25 Thread Bruce Momjian
On Thu, Sep 24, 2020 at 11:47:10PM -0500, Ron wrote:
> On 9/24/20 6:20 PM, Bruce Momjian wrote:
> > On Wed, Sep 23, 2020 at 12:28:14AM +0200, tutilu...@tutanota.com wrote:
> > > Sep 21, 2020, 7:53 PM by j...@commandprompt.com:
> > >  See my comment about Google. The information is out there and easy 
> > > to find.
> > > 
> > > I guess I'm the worst idiot in the world, then, who can't DuckDuckGo 
> > > (Google is
> > > evil) it even after 15 years.
> > > 
> > > Seriously, I didn't type my feedback "for fun". It may be difficult for 
> > > very
> > > intelligent people to understand (as often is the case, because you 
> > > operate on
> > > a whole different level), but the performance-related PostgreSQL 
> > > configuration
> > > options are a *nightmare* to me and many others. I spent *forever* reading
> > > about them and couldn't make any sense of it all. Each time I tried, I 
> > > would
> > > give up, frustrated and angry, with no real clue what "magic numbers" it
> > > wanted.
> > > 
> > > It's quite baffling to me how this can be so difficult for you all to
> > > understand. Even if we disregard the sheer intelligence factor, it's 
> > > clear that
> > > users of PG don't have the same intimate knowledge of PG's internals as 
> > > the PG
> > > developers, nor could possibly be expected to.
> > > 
> > > As mentioned, I kept going back to the default configuration over and over
> > > again. Anyone who doesn't is either a genius or pretends/thinks that they
> > > understand it. (Or I'm extremely dumb.)
> > I think there is a clear dependency that people reading the docs,
> > particularly for performance purposes, must have an existing knowledge
> > of a lot of low-level things --- this could be the cause of your
> > frustration.
> 
> And that's a serious problem with the documentation. (Not that I know how to
> fix it in an OSS project.)

We added a glossary in PG 13, so we could certainly have some kind of
hardware terms primer which explains various OS features that affect
Postgres.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-09-25 Thread Bruce Momjian
On Fri, Sep 25, 2020 at 10:04:53AM +0200, Thomas Kellerer wrote:
> I am not in the USA (and far from being US-centric as well) and I have been 
> working
> with relational databases for over thirty years. I never had problems using 
> unquoted
> ASCII names (using snake_case) for my database objects.
> 
> Would it be nice if I could use special characters like öäü in the names of 
> tables and columns (without the hassle of quoting them)?
> Yes, absolutely.
> 
> Does not using them, limit me in any way doing what I want to do?
> No, it doesn't.
> 
> > Always the same thing. The slightest criticism, no matter how
> > warranted, always results in: "Fine. Go somewhere else. Use something
> > else."
> > Never: "Oh, right. Sorry, but we always used lowercase ourselves and
> > therefore didn't consider this. In retrospect, it's an embarrassing
> > mistake! We'll fix it in the next release. Thanks for pointing that
> > out."
> 
> Changing this behaviour has been discussed on this list multiple times.
> 
> The problem is, that currently no one sees a way to change this without
> breaking all (or most) existing code that relies on it (at least not with
> the resources the project hast).
> 
> It's my understanding (as a user, not a developer) that the impact on the 
> code base would be huge, and
> the community doesn't indeed really see a benefit in doing so.
> 
> It has especially been discussed to implement a behaviour that complies
> with the SQL standard which *requires* to fold non-quoted names to uppercase!

I did write a blog entry about case folding:

https://momjian.us/main/blogs/pgblog/2020.html#June_26_2020
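
In short, the folding behavior under discussion:

    CREATE TABLE Foo (Id int);   -- unquoted names are folded to "foo" and "id"
    SELECT id FROM foo;          -- works
    SELECT "Id" FROM "Foo";      -- fails; the quoted mixed-case names were never created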

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Restoring a database problem

2020-09-30 Thread Bruce Momjian
On Thu, Oct  1, 2020 at 01:00:21PM +1300, Glen Eustace wrote:
> I have had to do this so rarely and it has almost always been in a bit of a
> panic so may well be missing something really obvious.
> 
> What I want to know is how to quiesce a database so that I can restore it.
> 
> I need to close all existing connections and the prevent people/processes from
> connecting again until the restore has completed.
> 
> Currently I have been logging into a bunch of servers and stopping various
> daemons, then on the database server killing processes until the database is
> apparently idle then dropping the database and doing the restore. Then
> restarting the daemons etc. I am sure I am not doing this the right way so
> advice gratefully received.

I would modify pg_hba.conf to block access temporarily.
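
A minimal sketch, assuming the database is named "mydb":  add "reject"
rules for it near the top of pg_hba.conf, reload, and then terminate the
sessions that are already connected:

    -- in pg_hba.conf, above the normal rules:
    --     local  mydb  all             reject
    --     host   mydb  all  0.0.0.0/0  reject
    SELECT pg_reload_conf();
    SELECT pg_terminate_backend(pid)
      FROM pg_stat_activity
     WHERE datname = 'mydb' AND pid <> pg_backend_pid();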

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade issue upgrading 10 -> 13

2020-10-01 Thread Bruce Momjian
On Thu, Oct  1, 2020 at 05:28:53PM +0200, Thomas Kellerer wrote:
> 
> 
> > Sean Brown wrote on 01.10.2020 at 16:51:
> > I’m having a little problem using pg_upgrade to move from 10 to 13,
> > I’m assuming the issue is related to the removal of pg_pltemplate,
> > but I can’t find anything related to how to handle it.
> > 
> > pg_upgrade —check reports that the clusters are compatible, but the
> > actual upgrade fails starting the new cluster with -
> > 
> > ERROR: relation “pg_catalog.pg_pltemplate” does not exist STATEMENT:
> > GRANT SELECT ON TABLE “pg_catalog"."pg_pltemplate” TO "appuser”;
> > 
> > Is there a way to deal with this that doesn’t include dropping the
> > source table?
> > 
> 
> Does revoking the privilege before running pg_upgrade help?
> 
> So in the v10 database:
> 
>   revoke select on pg_catalog.pg_pltemplate from appuser;

Yeah, there must be a reference to pg_catalog.pg_pltemplate somewhere
that was missed.  I think a simple dump/restore would also error on the
restore, but a normal restore might ignore the error, while pg_upgrade
will not.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade issue upgrading 10 -> 13

2020-10-01 Thread Bruce Momjian
On Thu, Oct  1, 2020 at 09:03:31AM -0700, Adrian Klaver wrote:
> On 10/1/20 7:51 AM, Sean Brown wrote:
> > I’m having a little problem using pg_upgrade to move from 10 to 13, I’m 
> > assuming the issue is related to the removal of pg_pltemplate, but I can’t 
> > find anything related to how to handle it.
> > 
> > pg_upgrade —check reports that the clusters are compatible, but the actual 
> > upgrade fails starting the new cluster with -
> > 
> > ERROR: relation “pg_catalog.pg_pltemplate” does not exist
> > STATEMENT: GRANT SELECT ON TABLE “pg_catalog"."pg_pltemplate” TO "appuser”;
> 
> Which points to the dangers of doing things to the system tables. They can
> change/disappear between major versions.

And pg_dump (used by pg_upgrade) has little handling for such changes.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: New "function tables" in V13 documentation

2020-11-11 Thread Bruce Momjian
On Mon, Nov  9, 2020 at 07:46:21PM -0600, Merlin Moncure wrote:
> On Sun, Nov 8, 2020 at 3:57 PM Thomas Kellerer  wrote:
> >
> > In case someone is interested: there is a little discussion going on on 
> > Reddit whether the new format of presenting functions in V13 is a step 
> > backwards:
> >
> >
> > https://www.reddit.com/r/PostgreSQL/comments/jpi0rp/does_anyone_else_feel_like_the_v13_docs_are_a/
> 
> It's more than a little ironic that reddit's "old" format (still
> visible via old.reddit.com) is objectively clearer and better along
> exactly the same lines.

I think it is funny that the Reddit thread thinks we made the format
change because of mobile, but it was actually more for PDF output, which
is more old-school than even web pages.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Christopher Browne

2020-11-12 Thread Bruce Momjian
On Wed, Nov  4, 2020 at 06:29:18PM -0500, Steve Singer wrote:
> 
> It is with much sadness that I am letting the community know that Chris
> Browne passed away recently.
> 
> Chris had been a long time community member and was active on various
> Postgresql mailing lists.  He was a member of the funds committee, the PgCon
> program committee and served on the board of the PostgreSQL Community
> Association of Canada. Chris was a maintainer of the Slony replication
> system and worked on various PG related tools.
> 
> I worked with Chris for over 10 years and anytime someone had a problem he
> would jump at the chance to help and propose solutions. He always had time
> to listen to your problem and offer ideas or explain how something worked.

FYI, the funeral service is online today at 11am US/Eastern time (in one
hour):

https://youtu.be/hOgMwmFYJM4

The URL was listed on the web page of the obituary:


https://www.arbormemorial.ca/capital/obituaries/christopher-bruce-browne/57436/

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Encryption with customer provided key in a multi tenant Postgres JSONB DB

2020-11-12 Thread Bruce Momjian
On Thu, Nov 12, 2020 at 11:57:27AM +0530, Saurav Sarkar wrote:
> Hi David,
> 
> Thanks for the reply.
> 
> I just wanted to check if there is any possibility or any activity ongoing
> which can enable database or fine granular level encryption in future.
> 
> Probably then i can wait otherwise i have to move towards Client Side
> encryption as you mentioned.

Postgres shares the same WAL files for all databases in a cluster, so
the idea of having multiple keys for different users is very hard or
impossible to implement.  Client-side is much better for this use-case.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Issue upgrading from 9.5 to 13 with pg_upgrade: "connection to database failed: FATAL: database "template1" does not exist"

2020-11-13 Thread Bruce Momjian
On Fri, Nov 13, 2020 at 12:06:34PM -0500, Jeremy Wilson wrote:
> 
> 
> > On Nov 13, 2020, at 12:00 PM, Tom Lane  wrote:
> > 
> > Are you by any chance trying to preload any of the postgis-related
> > extensions?  If so, try not doing that.
> 
> Not sure what you mean by this - I’ve installed the postgis packages for 9.5 
> and 13 and the extensions are installed and working in 9.5, but I’m not doing 
> anything but initdb and then pg_upgrade for 13.

I think he is asking about shared_preload_libraries,
local_preload_libraries, and session_preload_libraries.  Also, try
running this query and show us what is not the default:

SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade of 11 -> 13: free(): invalid pointer

2020-11-17 Thread Bruce Momjian
On Tue, Nov 17, 2020 at 11:59:10AM -0500, Jeremy Wilson wrote:
> pg_restore: WARNING:  terminating connection because of crash of another 
> server process
> DETAIL:  The postmaster has commanded this server process to roll back the 
> current transaction and exit, because another server process exited 
> abnormally and possibly corrupted shared memory.
> HINT:  In a moment you should be able to reconnect to the database and repeat 
> your command.
> pg_restore: creating COMMENT "public.FUNCTION "st_isempty"("rast" 
> "public"."raster")"
> pg_restore: while PROCESSING TOC:
> pg_restore: from TOC entry 5338; 0 0 COMMENT FUNCTION "st_isempty"("rast" 
> "public"."raster") postgres
> pg_restore: error: could not execute query: server closed the connection 
> unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> Command was: COMMENT ON FUNCTION "public"."st_isempty"("rast" 
> "public"."raster") IS 'args: rast - Returns true if the raster is empty 
> (width = 0 and height = 0). Otherwise, returns false.’;

My guess is that this is a crash in the PostGIS shared library.  I would
ask the PostGIS team if they know of any crash cases, and if not, I
think you need to do a pg_dump of the database and test-load it into a
new database to see what query makes it fail, and then load debug
symbols and do a backtrace of the stack at the point of the crash. 
Yeah, not fun.
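
A sketch of the backtrace step, assuming the postgresql13 and postgis
debuginfo packages are installed and the server was started with core
dumps enabled (the core file path is a placeholder):

    gdb --batch -ex bt /usr/pgsql-13/bin/postgres /path/to/core.file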

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade of 11 -> 13: free(): invalid pointer

2020-11-17 Thread Bruce Momjian
On Tue, Nov 17, 2020 at 02:44:47PM -0500, Bruce Momjian wrote:
> On Tue, Nov 17, 2020 at 11:59:10AM -0500, Jeremy Wilson wrote:
> > pg_restore: WARNING:  terminating connection because of crash of another 
> > server process
> > DETAIL:  The postmaster has commanded this server process to roll back the 
> > current transaction and exit, because another server process exited 
> > abnormally and possibly corrupted shared memory.
> > HINT:  In a moment you should be able to reconnect to the database and 
> > repeat your command.
> > pg_restore: creating COMMENT "public.FUNCTION "st_isempty"("rast" 
> > "public"."raster")"
> > pg_restore: while PROCESSING TOC:
> > pg_restore: from TOC entry 5338; 0 0 COMMENT FUNCTION "st_isempty"("rast" 
> > "public"."raster") postgres
> > pg_restore: error: could not execute query: server closed the connection 
> > unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > Command was: COMMENT ON FUNCTION "public"."st_isempty"("rast" 
> > "public"."raster") IS 'args: rast - Returns true if the raster is empty 
> > (width = 0 and height = 0). Otherwise, returns false.’;
> 
> My guess is that this is a crash in the PostGIS shared library.  I would
> ask the PostGIS team if they know of any crash cases, and if not, I
> think you need to do a pg_dump of the database and test-load it into a
> new database to see what query makes it fail, and then load debug
> symbols and do a backtrace of the stack at the point of the crash. 
> Yeah, not fun.

Actually pg_dump --schema-only is what you want to dump and load into a
separate database.  No need to dump the data.
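
A sketch of that test (binary paths, ports, and database names are
placeholders):

    /usr/pgsql-13/bin/pg_dump --schema-only -p 5432 mydb > mydb_schema.sql
    /usr/pgsql-13/bin/createdb -p 5433 restore_test
    /usr/pgsql-13/bin/psql -p 5433 -d restore_test -f mydb_schema.sql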

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 08:59:58PM +0100, Marcin Giedz wrote:
> 
> > anyway got this from your query:
> 
> > 16402 | plpython_call_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | f
> | f | v | u | 0 | 0 | 2280 | | | | | | | plpython_call_handler | $libdir/
> plpython2 | |
> > 16403 | plpython_inline_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f |
> t | f | v | u | 1 | 0 | 2278 | 2281 | | | | | | plpython_inline_handler |
> $libdir/plpython2 | |
> > 16404 | plpython_validator | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | t | 
> > f
> | v | u | 1 | 0 | 2278 | 26 | | | | | | plpython_validator | $libdir/plpython2
> | |
> 
> Uh-huh, so there you have it.  These must be leftovers from some
> pre-extension incarnation of plpython that was never cleaned up
> properly.  Try
> 
> DROP FUNCTION pg_catalog.plpython_call_handler();
> DROP FUNCTION pg_catalog.plpython_inline_handler(internal);
> DROP FUNCTION pg_catalog.plpython_validator(oid);
> 
> It'll be interesting to see if there are any dependencies.
> 
> regards, tom lane
> 
> -
> 
> BINGO! after drops all went smooth and easy

I think one big problem is that when pg_upgrade fails in this way, users
are required to do some complex system catalog queries to diagnose the
cause.  Is there a way to simplify this for them?

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 03:25:56PM -0500, Tom Lane wrote:
> Bruce Momjian  writes:
> > I think one big problem is that when pg_upgrade fails in this way, users
> > are required to do some complex system catalog queries to diagnose the
> > cause.  Is there a way to simplify this for them?
> 
> Maybe pg_upgrade should print the actual function names, not just the
> probin values.

It is done that way so we don't overwhelm them with lots of function
names, and they can focus on the library.  I was thinking of showing
them a query that would allow them to investigate.
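
Something along these lines, run in each database pg_upgrade reports,
would show which functions still reference the problem library:

    SELECT proname, probin
      FROM pg_proc
     WHERE probin LIKE '%plpython2%';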

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 10:06:17PM +, Devrim Gunduz wrote:
> Hi,
> 
> On Wed, 2020-11-18 at 14:54 -0500, Tom Lane wrote:
> > Uh-huh, so there you have it.  These must be leftovers from some
> > pre-extension incarnation of plpython that was never cleaned up
> > properly. 
> 
> I think this was broken when Marcin first dropped the language. We
> should just have dropped the extension, I guess.

pg_upgrade does have some code to handle plpython call handlers in
function.c:get_loadable_libraries();

 * Systems that install plpython before 8.1 have
 * plpython_call_handler() defined in the "public" schema, causing
 * pg_dump to dump it.  However that function still references
 * "plpython" (no "2"), so it throws an error on restore.  This code
 * checks for the problem function, reports affected databases to the
 * user and explains how to remove them. 8.1 git commit:
 * e0dedd0559f005d60c69c9772163e69c204bac69
 * http://archives.postgresql.org/pgsql-hackers/2012-03/msg01101.php
 * http://archives.postgresql.org/pgsql-bugs/2012-05/msg00206.php

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 09:35:20PM -0700, Rob Sargent wrote:
> > pg_upgrade does have some code to handle plpython call handlers in
> > function.c:get_loadable_libraries();
> > 
> > * Systems that install plpython before 8.1 have
> > * plpython_call_handler() defined in the "public" schema, causing
> > * pg_dump to dump it.  However that function still references
> > * "plpython" (no "2"), so it throws an error on restore.  This code
> > * checks for the problem function, reports affected databases to the
> > * user and explains how to remove them. 8.1 git commit:
> > * e0dedd0559f005d60c69c9772163e69c204bac69
> > * http://archives.postgresql.org/pgsql-hackers/2012-03/msg01101.php
> >     * http://archives.postgresql.org/pgsql-bugs/2012-05/msg00206.php
> > 
> > -- 
> >  Bruce Momjian  https://momjian.us
> >  EnterpriseDB https://enterprisedb.com
> > 
> >  The usefulness of a cup is in its emptiness, Bruce Lee
> > 
> > 
> Unless it stop at prompts for “Yes, ok, I understand.” this could easily fly 
> out the window.

It issues this message and fails:

if (PQntuples(res) > 0)
{
if (!found_public_plpython_handler)
{
pg_log(PG_WARNING,
   "\nThe old cluster has a \"plpython_call_handler\" 
function defined\n"
   "in the \"public\" schema which is a duplicate of 
the one defined\n"
   "in the \"pg_catalog\" schema.  You can confirm this 
by executing\n"
   "in psql:\n"
   "\n"
   "\\df *.plpython_call_handler\n"
   "\n"
   "The \"public\" schema version of this function was 
created by a\n"
   "pre-8.1 install of plpython, and must be removed 
for pg_upgrade\n"
   "to complete because it references a now-obsolete 
\"plpython\"\n"
   "shared object file.  You can remove the \"public\" 
schema version\n"
   "of this function by running the following 
command:\n"
   "\n"
   "DROP FUNCTION public.plpython_call_handler()\n"
   "\n"
   "in each affected database:\n"
           "\n");
}
pg_log(PG_WARNING, "%s\n", active_db->db_name);
found_public_plpython_handler = true;
}
PQclear(res);
}

PQfinish(conn);
}

if (found_public_plpython_handler)
pg_fatal("Remove the problem functions from the old cluster to 
continue.\n");


-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 10:57:00PM -0700, Rob Sargent wrote:
> > It issues this message and fails:
> > 
> >if (PQntuples(res) > 0)
> >{
> >if (!found_public_plpython_handler)
> >{
> >pg_log(PG_WARNING,
> >   "\nThe old cluster has a 
> > \"plpython_call_handler\" function defined\n"
> >   "in the \"public\" schema which is a duplicate of 
> > the one defined\n"
> >   "in the \"pg_catalog\" schema.  You can confirm 
> > this by executing\n"
> >   "in psql:\n"
> >   "\n"
> >   "\\df *.plpython_call_handler\n"
> >   "\n"
> >   "The \"public\" schema version of this function 
> > was created by a\n"
> >   "pre-8.1 install of plpython, and must be removed 
> > for pg_upgrade\n"
> >   "to complete because it references a now-obsolete 
> > \"plpython\"\n"
> >   "shared object file.  You can remove the 
> > \"public\" schema version\n"
> >   "of this function by running the following 
> > command:\n"
> >   "\n"
> >   "DROP FUNCTION 
> > public.plpython_call_handler()\n"
> >   "\n"
> >   "in each affected database:\n"
> >   "\n");
> >}
> >pg_log(PG_WARNING, "%s\n", active_db->db_name);
> >    found_public_plpython_handler = true;
> >}
> >PQclear(res);
> >}
> > 
> >PQfinish(conn);
> >}
> > 
> >if (found_public_plpython_handler)
> >pg_fatal("Remove the problem functions from the old cluster to 
> > continue.\n");
> > 
> > 
> Does this jive with OP’s experience?  Or should it?

It didn't trigger this message for him, and I am also wondering if it
should have.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Restoring Database on Version 11 does not restore database comment

2020-11-20 Thread Bruce Momjian
On Fri, Nov 20, 2020 at 04:01:26PM -0700, David G. Johnston wrote:
> On Fri, Nov 20, 2020 at 3:58 PM George Weaver  wrote:
> 
> What am I missing?
> 
> 
> 
> Release notes.
> 
> https://www.postgresql.org/docs/11/release-11.html
> 
> "pg_dump and pg_restore, without --create, no longer dump/restore
> database-level comments and security labels; those are now treated as
> properties of the database."

Yeah, I realize this new behavior is kind of odd, but logically, it
makes sense.
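
If you need the comment in the new database, a rough sketch of the two
workarounds (database and file names here are hypothetical):

    # Custom-format dump; pass --create at restore time so database-level
    # properties such as the comment are restored
    pg_dump -Fc -d mydb -f mydb.dump
    pg_restore --create -d postgres mydb.dump

    # Or re-apply the comment by hand after a plain restore
    psql -d mydb -c "COMMENT ON DATABASE mydb IS 'my application database'"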

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Performance hit if I create multiple DBs on same instance

2020-11-24 Thread Bruce Momjian
On Tue, Nov 24, 2020 at 10:33:46AM -0500, David Gauthier wrote:
> Ok, thanks. 
> 
> I was also planning on manually running vacuum, reindex and analyze on the 
> main
> DB after removing the data from the main DB after archiving.  Does that sound
> necessary and reasonable ?

This blog entry summarizes the various levels of isolation and their
benefits:

https://momjian.us/main/blogs/pgblog/2012.html#April_23_2012
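
If you do run the maintenance by hand after the bulk delete, a minimal sketch
(database and table names are hypothetical):

    # Reclaim dead rows and refresh planner statistics in one pass
    psql -d maindb -c 'VACUUM (VERBOSE, ANALYZE) big_table'

    # Rebuild bloated indexes; CONCURRENTLY avoids blocking writes (PG 12+)
    psql -d maindb -c 'REINDEX TABLE CONCURRENTLY big_table'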

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Performance hit if I create multiple DBs on same instance

2020-11-25 Thread Bruce Momjian
On Wed, Nov 25, 2020 at 03:59:06PM +0100, Laurenz Albe wrote:
> On Tue, 2020-11-24 at 11:34 -0600, Ron wrote:
> > And if you're afraid of autovacuum and autoanalyze stealing resources, then 
> > disable them (at the table level).
> 
> Ugh, bad advice.
> 
> Better would be to VACUUM (FREEZE) these static table once, then autovacuum
> won't ever perform resource consuming activities on them again.

Yes, also, even if you never do that, autovacuum will eventually freeze
those tables and never access them again.
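
A minimal sketch of that one-time freeze, with a hypothetical table name:

    # Freeze all rows once so autovacuum has nothing left to do on this static table
    psql -d mydb -c 'VACUUM (FREEZE, VERBOSE) archive_table'

    # Verify: the table's relfrozenxid age should now be close to zero
    psql -d mydb -c "SELECT relname, age(relfrozenxid) FROM pg_class WHERE relname = 'archive_table'"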

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Is there a good discussion of optimizations?

2020-12-23 Thread Bruce Momjian
On Wed, Dec 23, 2020 at 05:55:57PM -0800, Guyren Howe wrote:
> I’d be interested in any comments anyone has about useful things to discuss
> here, and in particular I’m interested in an accessible discussion of the 
> sorts
> of query optimizations Postgres can do for us.

I have a few talks on this:

https://momjian.us/main/presentations/performance.html#optimizer
https://momjian.us/main/presentations/internals.html#internal_pics

and this webpage:

http://www.interdb.jp/pg/pgsql03.html

Robert Haas has a number of them:

https://sites.google.com/site/robertmhaas/presentations
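
If you want to watch the optimizer's choices on your own queries while reading
those, a minimal sketch (table and predicate are hypothetical):

    # Show the chosen plan, then the plan with actual run-time and buffer figures
    psql -d mydb -c 'EXPLAIN SELECT * FROM orders WHERE customer_id = 42'
    psql -d mydb -c 'EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42'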

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Max# of tablespaces

2021-01-05 Thread Bruce Momjian
On Sun, Jan  3, 2021 at 05:37:52PM +, Thomas Flatley wrote:
> Excellent - thanks for the fast response - it was an oracle dba that set it 
> up initially so that may explain it - 

Agreed.  It was probably done that way for an invalid reason and should
be cleaned up.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: What to do with tablespaces when upgrading to pg13 from pg1X?

2021-01-11 Thread Bruce Momjian
On Tue, Jan 12, 2021 at 10:28:52AM +0800, James(王旭) wrote:
> Hello,
> I was planning to upgrade from 12 to 13 using this command:
> 
> /usr/pgsql-13/bin/pg_upgrade -b /usr/pgsql-12/bin/ -B /usr/pgsql-13/bin/ 
> -d
> /data/pg/ -D /pg/pgdata_13/ --jobs=10
> 
> 
> And I got this output:
> 
> Checking for presence of required libraries ok
> Checking database user is the install user  ok
> Checking for prepared transactions  ok
> Checking for new cluster tablespace directories
> new cluster tablespace directory already exists: "/data/tsdb/metas/
> PG_13_202007201"
> Failure, exiting
> 
> I think it's because /data/tsdb was my tablespace dir which was out of the old
> main data directory(/data/pg/).
> 
> So what should I do with old tablespaces when upgrading ?

There should be a subdirectory under your tablespace directory for every
major version, e.g., PG_13_202007201.  I have no idea why your _new_
version already has a directory there.  Do you have a second cluster on
the machine that is using that tablespace directory for PG 13?
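
A quick way to see what is in there, using your tablespace path from above
(the PG_12 directory name is only an example):

    # Each major version that has used this tablespace gets its own subdirectory
    ls -l /data/tsdb/metas
    # e.g.  PG_12_201909212  PG_13_202007201

    # List the tablespaces the running cluster actually knows about
    psql -d postgres -c 'SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace'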

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: What to do with tablespaces when upgrading to pg13 from pg1X?

2021-01-11 Thread Bruce Momjian
On Tue, Jan 12, 2021 at 10:50:16AM +0800, James(王旭) wrote:
> You mean PG_13_202007201 was created by PG 13 ? No, there isn't any other
> clusters, maybe it was created by my previous failed attempts of upgrading.

Yes, it had to be created by PG 13 --- it starts with PG_13, followed by
the system catalog version.  Yes, it might be left over.  You have to
delete the old cluster and its tablespaces when you re-try.

> So it seems it should be ok that I issue a command like this 
> 
> find . -name "PG_13*"|xargs rm -rf 
> 
> and then do upgrade again?

I suggest you do more research than just run that --- at least I would.
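
If you do go that route, a more cautious sketch than a blind "rm -rf", using
the paths from your messages:

    # Make sure no PG 13 cluster is running against these directories
    ps -ef | grep -i postgres

    # See exactly what would be removed before removing anything
    find /data/tsdb -maxdepth 3 -type d -name 'PG_13_*'

Only once you are sure those are leftovers from the failed attempt should you
remove them and re-run initdb and pg_upgrade.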

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Do we need a way to moderate mailing lists?

2021-01-16 Thread Bruce Momjian
On Sat, Jan 16, 2021 at 02:50:58PM -0300, Álvaro Herrera wrote:
> On 2021-Jan-16, Hemil Ruparel wrote:
> 
> > Okay. I will not reply to them. Enough mental cycles wasted
> 
> One way you could help, is by learning what top-posting is, learning not
> to do it, and teaching others the same.  Same with not quoting entire
> messages on reply.

That "quoting entire messages on reply" is something I see far too often
here.  I have been meaning to mention this problem.  Thousands of people
are reading postings here, so it pays to take time to trim down what
others have to view.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Do we need a way to moderate mailing lists?

2021-01-16 Thread Bruce Momjian
On Sat, Jan 16, 2021 at 03:34:32PM -0500, Tom Lane wrote:
> Alvaro Herrera  writes:
> > On 2021-Jan-16, Bruce Momjian wrote:
> >> That "quoting entire messages on reply" is something I see far too often
> >> here.  I have been meaning to mention this problem.  Thousands of people
> >> are reading postings here, so it pays to take time to trim down what
> >> others have to view.
> 
> > Yes.  Gmail, by hiding the quoted part of the message, has taught people
> > that it's okay to leave the whole thing in place.  For most of the rest
> > of the world, it's an annoyance.
> 
> Top-posting goes along with that.  The gmail style of top-posting and
> not trimming what's quoted is sort of okay, as long as you don't actually
> need to read any of what's quoted (but then why bother quoting it...)
> 
> The combination of bottom-posting and not trimming what you quoted is
> actually the worst of all possible worlds, because then people are
> forced to scroll through a whole lot of stuff to see what you added.

Agreed.  By telling people not to top-post, we have made it worse in
many cases.

> I see way too many people doing that lately, and to be honest I usually
> stop reading their messages once I see that that's what they did.
> You should only quote enough to remind the reader of what you're
> responding to.

That is also what I do.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: System (and DB) upgrade problem., "part 2"

2021-01-18 Thread Bruce Momjian
On Mon, Jan 18, 2021 at 09:53:33PM +0100, W.P. wrote:
> W dniu 18.01.2021 o 17:19, Laurenz Albe pisze:
> > On Mon, 2021-01-18 at 05:33 +0100, W.P. wrote:
> > > For 9.5 to 9.6 transition, it worked like charm. (except some problems
> > > with parsing json data in a view, but this I will address later, after
> > > upgrade to final version - Fedora 30, probably PG 10.x).
> > > 
> > > Now I have problem with 9.6 -> 10.7 (Fedora 27 -> 28) upgrade.
> > > As expected after system upgrade, database fails to start "files
> > > incompatible with binaries".
> > > 
> > > Found pg_upgrade, command line options "-b
> > > /usr/lib/pgsql/postgresql-9.6/bin -B /usr/bin -d /var/lib/pgsql/data/"
> > > BUT what should I put for option -D? ("new cluster data") Was
> > > (directory) already created for me (during system upgrade), or have I to
> > > create it somewhere (where? is best practice).
> > > 
> > > There are NO logs for today trial to start in
> > > /var/lib/pgsql/data/pg_log/. Where they could be?
> > You would call "initdb" or "postgresql-10-setup" to create a new, empty
> > cluster in version 10 and use that with -D.
> Ok, so "step-by-step":
> 1), I copy / move "somewhere" OLD DB files (*/pgsql/data/* for -d option),
> 2). Do initdb / postgresql-10-setup to create NEW empty base (in
> /var/lib/pgsql/ or  somewhere, for -D option),
> 3). do pg_upgrade.
> 
> Is that correct?
> 
> Is there somewhere "guide for 9.x -> 10.x CONCEPTS changes (and upgrade)
> guide"? (clusters etc).

The pg_upgrade docs have all the steps.
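
As a rough sketch of those steps with the paths from your message (the new
data directory name here is hypothetical):

    # 1. Create a new, empty v10 cluster
    /usr/bin/initdb -D /var/lib/pgsql/data-10

    # 2. With both clusters stopped, run the compatibility check, then the upgrade
    /usr/bin/pg_upgrade -b /usr/lib/pgsql/postgresql-9.6/bin -B /usr/bin \
        -d /var/lib/pgsql/data -D /var/lib/pgsql/data-10 --check
    /usr/bin/pg_upgrade -b /usr/lib/pgsql/postgresql-9.6/bin -B /usr/bin \
        -d /var/lib/pgsql/data -D /var/lib/pgsql/data-10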

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: System (and DB) upgrade problem., "part 2"

2021-01-18 Thread Bruce Momjian
On Mon, Jan 18, 2021 at 03:13:13PM -0600, Ron wrote:
> On 1/18/21 2:58 PM, Bruce Momjian wrote:
> > On Mon, Jan 18, 2021 at 09:53:33PM +0100, W.P. wrote:
> [snip]
> > > Ok, so "step-by-step":
> > > 1), I copy / move "somewhere" OLD DB files (*/pgsql/data/* for -d option),
> > > 2). Do initdb / postgresql-10-setup to create NEW empty base (in
> > > /var/lib/pgsql/ or  somewhere, for -D option),
> > > 3). do pg_upgrade.
> > > 
> > > Is that correct?
> > > 
> > > Is there somewhere "guide for 9.x -> 10.x CONCEPTS changes (and upgrade)
> > > guide"? (clusters etc).
> > The pg_upgade docs have all the steps.
> 
> The documents tend to assume the reader thoroughly knows Postgresql, and
> that's manifestly Not True.

What is your point?  Maybe they shouldn't be using pg_upgrade then,
right?  If the documentation is unclear, please explain why.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: upgrade using logical replication

2021-01-20 Thread Bruce Momjian
On Thu, Jan 21, 2021 at 10:25:39AM +0900, Ian Lawrence Barwick wrote:
> On Thu, Jan 21, 2021 at 9:19, Mohamed Wael Khobalatte wrote:
> 
> 
> 
> On Wed, Jan 20, 2021 at 2:37 PM Michael Lewis  wrote:
> 
> Using pg_upgrade takes minutes for an in place upgrade. If you can
> allow 1+ hour of downtime, it seems overly complicated to use logical
> replication.
> 
> 
> I suppose the Atul's issue is what to do with the replicas. Once he does
> pg_upgrade, then he will need to provision new ones, no? I suppose in this
> case logical would be better, with the new instance itself having 
> replicas.
> I haven't done it, and it's gonna require some setup time, definitely much
> longer than pg_upgrade then make do with one server until your new 
> physical
> replicas are set up. 
> 
> 
> The replicas will need to be set up at some point anyway; with logical
> replication the new cluster is ready to go once the new primary is fully
> "seeded" (and the new replicas have caught up with that). Switchover can then
> take place whenever convenient, with minimal downtime, more time for testing,
> and the possibility of switching back if issues are encountered.
> 
> Potential downsides to this approach are that the database schema may need to
> be
> modified to be suitable for logical replication, and additional resources may
> be
> needed to host the old and new clusters simultaneously during the migration
> process.

pg_upgrade docs have instructions on how to upgrade replicas in place
using rsync with hard links.
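
The rsync step in those docs has this general shape (run on the primary with
both old and new clusters shut down; the paths and host name below are
placeholders):

    rsync --archive --delete --hard-links --size-only --no-inc-recursive \
        /var/lib/pgsql/11/data /var/lib/pgsql/12/data \
        standby.example.com:/var/lib/pgsql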

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Postgres freelancing sites

2021-01-28 Thread Bruce Momjian
On Thu, Jan 28, 2021 at 06:44:37PM +0200, Yambu wrote:
> Hello
> 
> May I know where I can get freelancer jobs for postgres?

Well, we have a jobs email list:

https://wiki.postgresql.org/wiki/Job_postings

and instructions on how to post job _openings_.  I think it can also be
used by people who are looking for jobs.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Database upgrade Pg11 to Pg12 issues

2021-02-03 Thread Bruce Momjian
On Wed, Feb  3, 2021 at 11:25:11PM +, Simon Windsor wrote:
> Hi
> 
> I have upgraded many Pg databases from 9 to 10 to 11 using the Ubuntu
> pg_upgradecluster command (wrapper to pg_upgrade) without issues, however today
> I upgraded a DB from 11 to 12.
> 
> Using the syntax
> 
> pg_upgradecluster -k -m upgrade 11 main
> 
> using the latest 12.5 binaries all appeared to work well, and for a 330G DB
> this took 2 minutes. On starting the DB using
> 
> systemctl start postgresql@12.service
> 
> the DB started, but the screen showed the default directory was /var/lib/
> postgresql/11/main. Checking the open database with
> 
> show data_directory; gave /var/lib/postgresql/12/main
> 
> Then using lsof | grep '11/main' gave no files, but lsof | grep '12/main' gave
> over 30K files, Checking the directory sizes suggests that 11/main is 300G and
> 12/main is 40G

This doesn't make sense to me.  Since we hard-linked, why would 12 be so
much smaller?  If it were symlinks, I could imagine that, but it doesn't
use symlinks, just hard links, so it should be similar.  Please look at
the size of main/base on both, since that is where your data is.  Maybe
11 just has a lot of old WAL that isn't copied.

> So all appears OK, with files hard linked between 11/main and 12/main, and I
> assume the issue relates to the PGDATA/postgresql.auto.conf file being just
> copied from the original 11/main with data_directory being set to /var/lib/
> postgresql/11/main.
> 
> If I where to run pg_dropcluster 11 main to remove the old database and conf
> files, will this destroy my running Pg12 database with hard linked files in 
> 11/
> main and 12/main? In theory it shouldn't and files from 11/main and 12/main
> will all be under 12/main, I just have doubts.
> 
> Is there an easy way to tidy this up?

Yes, if the sizes made sense, removing 11 would be fine, but right now,
it doesn't sound right.
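
For example, a few quick checks along those lines, using the paths from your
message:

    # Compare where the real data lives
    du -sh /var/lib/postgresql/11/main/base /var/lib/postgresql/12/main/base

    # Old WAL is not hard-linked across versions, so it can explain a large gap
    du -sh /var/lib/postgresql/11/main/pg_wal

    # A link count of 2 on a data file shows it is shared between 11 and 12
    stat -c '%h %n' /var/lib/postgresql/12/main/base/*/* | head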

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee




