Re: Qualifying use of separate TABLESPACES (performance/functionality)

2024-07-17 Thread Imndlf8r

On 7/16/2024 9:57 PM, Christophe Pettus wrote:

On Jul 16, 2024, at 21:45, imndl...@gmx.com wrote: Or, does Postgres
expect to be able to access any media however it wants (i.e., R/w),
regardless of the expected access patterns of the data stored there?


Well, yes and no.

PostgreSQL will not respond well to having media that is literally read only
in the sense that a write operation to it will fail.


Picking a nit here: if the write() never returns an error, then Postgres
never (?) knows that the write "didn't happen" (as expected).

If Postgres is writing the same data that ALREADY EXISTS, then the
physical medium could be immutable and Postgres would never know (see
next comment)


At some point, it will
need to (for example) vacuum the tables, and that will mean writes.


But, if nothing has ever been *written* (UPDATE) to that tablespace, what
value would VACUUM have?


 That
being said, if the only thing in a tablespace are tables (and their indexes)
that are written once then never again, you won't be constantly getting
writes to them.


Are there ever any "anonymous"/temporary objects that might be created
alongside these?


You may want to do a VACUUM (ANALYZE, FREEZE) on the tables
in those tablespaces once the data is loaded.


Ideally, I would like to find a way to "import" a preexisting table, index,
etc. without literally having to have two copies (the "original" being
read and the copy being *created*).  Having the original fill that role
*immutably* assures the customer that the original data hasn't been
altered in any way, even during its "initialization" within the RDBMS.

[It also avoids the issue of ever needing two *physical* copies of the data]


PostgreSQL will be generating WAL as you do data-modifying operations, so
that should be aimed at storage that has very low write fatigue.


Yes, but there is also the need to safeguard against read disturb induced
errors.  Using MLC/TLC/QLC/PLC devices on the "static" portions of the
data puts them at higher risk.  The FLASH controller needs to rewrite
errored pages, even if the application isn't explicitly altering the data.

I.e., "read only" can be interpreted on several different layers:
- the application never calls for a write (but the DBMS silently does)
- the DBMS never calls for a write (but the memory controller silently does)
- the physical memory is truly immutable

The last being the ideal.


Be very
cautious about using a RAM disk for anything, though, unless you are *very*
confident the battery backup on it is 100% reliable.  PostgreSQL isn't
designed to handle recovery from having the WAL just disappear out from
under it on a crash.


The server is battery-backed.  As long as the server can "fix things" before
the battery is exhausted, then all is well.  (Imagine the case of a physical
disk dying; what recourse is there?)

My recovery strategy is to checkpoint the database periodically so the most
recent snapshot can be reloaded.

As I said, it's an appliance.  You don't have a DBA attending to the
database in your vehicle's navigation system (the system would be deemed
too costly if it had to support such an "accessory"  :> ).  The DBMS is
expected to be as reliable as the underlying data store.  It abstracts
references to the data in the store and ensures applications can't create
"bad" data (strict typing, constraints, triggers, etc.).





Support of Postgresql 15 for Sles15Sp6

2024-07-17 Thread Akram Hussain
Hi,

I am new to this community.

I have one question.

Is there any planned release of postgresql 14 for Sles15SP6.

Could you please help me with this.

Regards,
Akram.


PgbackRest and EDB Query

2024-07-17 Thread KK CHN
Hi ,

I am trying pgBackRest (2.52.1) with PostgreSQL 16 on a lab setup on RHEL 9.
Both the PostgreSQL server and a remote repository host are configured with
pgBackRest, and everything works fine as specified in the documentation.

Note: here I am running the Postgres server and pgBackRest entirely as the
postgres user, and there are no issues with backup and recovery.



Query:
1. Is it possible to use pgBackRest with EnterpriseDB (EDB 16) for the
backup and recovery process, or does pgBackRest work only with the community
PostgreSQL database?


[When I ran EDB's initdb script during installation, it created enterprisedb
as the user and edb as the initial database.]


When I try to create the stanza on the EDB server, it throws an error
(pasted at the bottom).



NOTE:
I know that my EDB runs on port 5444 instead of 5432, with dbname edb
instead of postgres and user enterprisedb instead of postgres. How do I
specify these changes in the stanza-creation step, if EDB supports the
pgBackRest tool?

Or am I wasting my time [if pgBackRest won't work with EDB]?


Any hints much appreciated.

Thank you,
Krishane


ERROR:
root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
--log-level-console=info  stanza-create
2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
--pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
--repo1-host-user=postgres --stanza=OD_DM2
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5432': connection to server on socket
"/tmp/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that
socket?
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@uaterssdrservice01 ~]#


Re: PgbackRest and EDB Query

2024-07-17 Thread azeem subhani
Hi,

Your query:
1. Is it possible to use  PgBackrest with  EnterpriseDB(EDB -16) for the
backup and recovery process? Or pgback works only with the community
PostgreSQL database ?

Answer: Yes, you can use pgBackRest with EPAS 16 (EDB Postgres Advanced
Server); it is fully supported. The EDB official documentation website
provides two use cases, detailing both scenarios: pgBackRest running
locally, or on a dedicated remote server. See the following links for more
information:


https://www.enterprisedb.com/docs/supported-open-source/pgbackrest/
https://www.enterprisedb.com/docs/supported-open-source/pgbackrest/06-use_case_1/
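As a sketch, the connection details from the question might translate into
pgBackRest's configuration like this; the data path and socket details are
illustrative assumptions, not taken from the EDB docs:

```ini
; /etc/pgbackrest/pgbackrest.conf on the EDB host (hypothetical paths)
[global]
repo1-host=10.x.y.7
repo1-host-user=postgres

[OD_DM2]
pg1-path=/var/lib/edb/as16/data
pg1-port=5444
pg1-user=enterprisedb
pg1-database=edb
```

With the port, user, and database set per stanza, stanza-create should then
connect to the EDB instance instead of looking for a community cluster on
port 5432.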

On Wed, Jul 17, 2024 at 5:21 PM KK CHN  wrote:

> Hi ,
>
> I am trying pgbackrest(2.52.1)  with postgresql( version 16)  on  a lab
> setup on RHEL-9. Both  PostgreSQL server and a remote Repository host
> configured with pgbackrest and everything working fine as specified in the
> documentation.
>
> note:  here I am running postgres server and pgbackrest everything as
> postgres user and no issues in  backup and recovery.
>
>
>
> Query
> 1. Is it possible to use  PgBackrest with  EnterpriseDB(EDB -16) for the
> backup and recovery process? Or pgback works only with the community
> PostgreSQL database ?
>
>
> [ when I ran  initdb script of EDB while installing EDB it creates the
> enterpisedb  as user and edb as initial  database by the script. ]
>
>
> when I try to create the stanza on the EDB server it throws error  (pasted
> at bottom ).
>
>
>
> NOTE:
> I know that  my EDB  running on  port 5444 instead of  5432 and the dbname
> = edb instead of postgres, and user as  enterpisedb instead of postgres how
> to specify these changes in the stanza creation step if  EDB Supports
> pgbackrest tool ?
>
> OR   Am I doing a waste exercise  [if pgbackrest won't go ahead with EDB ]
> ?
>
>
> Any hints much appreciated.
>
> Thank you,
> Krishane
>
>
> ERROR:
> root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
> --log-level-console=info  stanza-create
> 2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
> --exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
> --pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
> --repo1-host-user=postgres --stanza=OD_DM2
> WARN: unable to check pg1: [DbConnectError] unable to connect to
> 'dbname='postgres' port=5432': connection to server on socket
> "/tmp/.s.PGSQL.5432" failed: No such file or directory
> Is the server running locally and accepting connections on that
> socket?
> ERROR: [056]: unable to find primary cluster - cannot proceed
>HINT: are all available clusters in recovery?
> 2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted
> with exception [056]
> [root@uaterssdrservice01 ~]#
>
>
>
>
>

-- 
Thanks
Azeem Subhani


Re: Support of Postgresql 15 for Sles15Sp6

2024-07-17 Thread Adrian Klaver

On 7/17/24 03:37, Akram Hussain wrote:

Hi,

I am new to this community.

I have one question.

Is there any planned release of postgresql 14 for Sles15SP6.


Your subject says 'Support of Postgresql 15 ...'; which version do you want?



Could you please help me with this.

Regards,
Akram.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Support of Postgresql 15 for Sles15Sp6

2024-07-17 Thread Tom Lane
Adrian Klaver  writes:
> On 7/17/24 03:37, Akram Hussain wrote:
>> Is there any planned release of postgresql 14 for Sles15SP6.

> Your subject says 'Support of Postgresql 15 ...' which version do you want?

The answer's the same either way.  The prebuilt packages the PG
community makes for SUSE are described here:

https://www.postgresql.org/download/linux/suse/

If you don't want those, and you don't want the ones that SUSE
themselves build, your third option is to build from source.

Offhand I would think that the community RPMs would serve you
fine.  The page above says they are built for SLES 15 SP5,
but SUSE would have to have screwed up pretty badly for a
package built on SP5 to not run on SP6.

regards, tom lane




Re: Semantic cache capability for Postgresql db

2024-07-17 Thread pradeep t
Thanks, I'll check it out.
Also, is there a Python client for connecting to pg_prewarm?
Will the existing PostgreSQL Python client also support pg_prewarm cache
usage?

On Tue, Jul 16, 2024 at 9:01 PM Kashif Zeeshan 
wrote:

> Hi Pradeep
>
> pgprewarm Extension is available for Postgres
>
> https://www.postgresql.org/docs/current/pgprewarm.html
>
>
>
> On Tue, Jul 16, 2024 at 7:05 PM pradeep t  wrote:
>
>> Hi,
>>
>> Shall I use the Postgresql database for a* semantic cache *like the
>> Redis semantic cache?
>> Are we currently supporting such usage?
>>
>> --
>> Thanks and regards
>> Pradeep.T
>>
>

-- 
Thanks and regards
Pradeep.T


Issue with configuration parameter "require_auth"

2024-07-17 Thread Gaisford, Phillip
I am having trouble using require_auth 
(https://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-CONNECT-REQUIRE-AUTH).


Using golang sqlx.Connect on the client side, the connection fails and my 
Postgresql 16 server logs the following:

2024-07-17 14:42:07.285 UTC [115] FATAL:  unrecognized configuration parameter 
"require_auth"



What could the issue be?



Re: Issue with configuration parameter "require_auth"

2024-07-17 Thread Tom Lane
"Gaisford, Phillip"  writes:
> Using golang sqlx.Connect on the client side, the connection fails and my 
> Postgresql 16 server logs the following:
> 2024-07-17 14:42:07.285 UTC [115] FATAL:  unrecognized configuration 
> parameter "require_auth"

require_auth is a connection parameter, like host or port or dbname.
You seem to be trying to specify it as a server parameter, ie inside
the "options" connection parameter.

regards, tom lane
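To illustrate Tom's distinction with a quick check (plain Python, no driver
involved): require_auth travels in the URI's query string as a client-side
libpq connection parameter, and a conforming client must consume it itself
rather than forward it to the server the way server settings inside the
"options" parameter are forwarded. The URL below is the hypothetical one
from this thread.

```python
from urllib.parse import urlsplit, parse_qs

# The connection URI from the thread; require_auth sits alongside other
# client-side libpq parameters in the query string.
url = ("postgres://postgres:postgres@localhost:5432/aioli"
       "?require_auth=password&application_name=aioli-controller&sslmode=disable")

params = parse_qs(urlsplit(url).query)

# A libpq >= 16 client recognizes and consumes require_auth locally; a
# driver that doesn't know the parameter may wrongly pass it through to the
# server, producing the "unrecognized configuration parameter" FATAL error.
print(params["require_auth"])   # ['password']
print(params["sslmode"])        # ['disable']
```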




Re: Issue with configuration parameter "require_auth"

2024-07-17 Thread David G. Johnston
On Wednesday, July 17, 2024, Gaisford, Phillip 
wrote:

> I am having trouble using require_auth (https://www.postgresql.org/
> docs/16/libpq-connect.html#LIBPQ-CONNECT-REQUIRE-AUTH).
>
>
>
> Using golang sqlx.Connect on the client side, the connection fails and my
> Postgresql 16 server logs the following:
>
> 2024-07-17 14:42:07.285 UTC [115] FATAL:  unrecognized configuration
> parameter "require_auth"
>
>
>
> What could the issue be?
>
>
>

Seems like a bug or version mismatch in the client library.  That is a new
option in v16, so it may not be handled properly.  That option is
client-only; the client should not be sending it to the server, which the
error message indicates it is.  Unless you can reproduce it with psql, you'll
need to check your installation environment (maybe you only have v15-or-older
client libraries installed?) or file a bug with the driver project.

David J.


Re: PgbackRest and EDB Query

2024-07-17 Thread Kashif Zeeshan
Hi

On Wed, Jul 17, 2024 at 5:21 PM KK CHN  wrote:

> Hi ,
>
> I am trying pgbackrest(2.52.1)  with postgresql( version 16)  on  a lab
> setup on RHEL-9. Both  PostgreSQL server and a remote Repository host
> configured with pgbackrest and everything working fine as specified in the
> documentation.
>
> note:  here I am running postgres server and pgbackrest everything as
> postgres user and no issues in  backup and recovery.
>
>
>
> Query
> 1. Is it possible to use  PgBackrest with  EnterpriseDB(EDB -16) for the
> backup and recovery process? Or pgback works only with the community
> PostgreSQL database ?
>
It supports both community PG and EDB PG.

>
>
> [ when I ran  initdb script of EDB while installing EDB it creates the
> enterpisedb  as user and edb as initial  database by the script. ]
>
Enterprisedb is the default user created by EDB.

>
>
> when I try to create the stanza on the EDB server it throws error  (pasted
> at bottom ).
>
>
>
> NOTE:
> I know that  my EDB  running on  port 5444 instead of  5432 and the dbname
> = edb instead of postgres, and user as  enterpisedb instead of postgres how
> to specify these changes in the stanza creation step if  EDB Supports
> pgbackrest tool ?
>
You can enter this connection information in the pgBackRest config file, in
the stanza you create for your EDB instance.

e.g.:

[global]
repo1-path=/var/lib/edb/as15/backups

[demo]
pg1-path=/var/lib/edb/as15/data
pg1-user=enterprisedb
pg1-port=5444
pg-version-force=15

Refer to the following EDB documentation:

https://www.enterprisedb.com/docs/supported-open-source/pgbackrest/03-quick_start/


> OR   Am I doing a waste exercise  [if pgbackrest won't go ahead with EDB ]
> ?
>
>
> Any hints much appreciated.
>
> Thank you,
> Krishane
>
>
> ERROR:
> root@uaterssdrservice01 ~]# sudo -u postgres pgbackrest --stanza=OD_DM2
> --log-level-console=info  stanza-create
> 2024-07-17 17:42:13.935 P00   INFO: stanza-create command begin 2.52.1:
> --exec-id=1301876-7e055256 --log-level-console=info --log-level-file=debug
> --pg1-path=/var/lib/pgsql/16/data --repo1-host=10.x.y.7
> --repo1-host-user=postgres --stanza=OD_DM2
> WARN: unable to check pg1: [DbConnectError] unable to connect to
> 'dbname='postgres' port=5432': connection to server on socket
> "/tmp/.s.PGSQL.5432" failed: No such file or directory
> Is the server running locally and accepting connections on that
> socket?
> ERROR: [056]: unable to find primary cluster - cannot proceed
>HINT: are all available clusters in recovery?
> 2024-07-17 17:42:13.936 P00   INFO: stanza-create command end: aborted
> with exception [056]
> [root@uaterssdrservice01 ~]#
>
>
>
>
>


Re: Issue with configuration parameter "require_auth"

2024-07-17 Thread Gaisford, Phillip
Interesting…

So what would a connection URI using require_auth properly look like? Here’s 
what I have been trying:

url: 
"postgres://postgres:postgres@localhost:5432/aioli?require_auth=password&application_name=aioli-controller&sslmode=disable&sslrootcert=",


From: Tom Lane 
Date: Wednesday, July 17, 2024 at 11:24 AM
To: Gaisford, Phillip 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Issue with configuration parameter "require_auth"
"Gaisford, Phillip"  writes:
> Using golang sqlx.Connect on the client side, the connection fails and my 
> Postgresql 16 server logs the following:
> 2024-07-17 14:42:07.285 UTC [115] FATAL:  unrecognized configuration 
> parameter "require_auth"

require_auth is a connection parameter, like host or port or dbname.
You seem to be trying to specify it as a server parameter, ie inside
the "options" connection parameter.

regards, tom lane



Re: Support of Postgresql 15 for Sles15Sp6

2024-07-17 Thread Matthias Apitz
El día miércoles, julio 17, 2024 a las 10:50:33a. m. -0400, Tom Lane escribió:

> Adrian Klaver  writes:
> > On 7/17/24 03:37, Akram Hussain wrote:
> >> Is there any planned release of postgresql 14 for Sles15SP6.
> 
> > Your subject says 'Support of Postgresql 15 ...' which version do you want?
> 
> The answer's the same either way.  The prebuilt packages the PG
> community makes for SUSE are described here:
> 
> https://www.postgresql.org/download/linux/suse/
> 
> If you don't want those, and you don't want the ones that SUSE
> themselves build, your third option is to build from source.

We have always configured and compiled from source: on SuSE SLES 12 since
11.x, and now 15.1 and 16.2 on SuSE SLES 15 SP5 (which also run on SP6).

That said, it also depends on whether the prebuilt RPMs were configured for
OpenSSL 1.x or 3.x.

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub

I am not at war with Russia.
Я не воюю с Россией.
Ich bin nicht im Krieg mit Russland.




Re: Semantic cache capability for Postgresql db

2024-07-17 Thread Ron Johnson
What does "python client will support the pgprewarm cache usage" mean?

Read the pgprewarm docs.
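For what it's worth, pg_prewarm is exposed as ordinary SQL functions, so
there is no special client for it; any Python driver (psycopg, asyncpg,
etc.) can invoke it with a normal query. A sketch, where my_table is a
hypothetical relation name:

```sql
-- Install once per database (requires the extension to be available).
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Read a relation into shared_buffers; returns the number of blocks read.
SELECT pg_prewarm('my_table');
```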

On Wed, Jul 17, 2024 at 11:10 AM pradeep t  wrote:

> Thanks, I'll check it out.
> Also is there any python client for the connection to pgprewarm?
> Is the existing Postgresql python client will support the pgprewarm cache
> usage also?
>
> On Tue, Jul 16, 2024 at 9:01 PM Kashif Zeeshan 
> wrote:
>
>> Hi Pradeep
>>
>> pgprewarm Extension is available for Postgres
>>
>> https://www.postgresql.org/docs/current/pgprewarm.html
>>
>>
>>
>> On Tue, Jul 16, 2024 at 7:05 PM pradeep t 
>> wrote:
>>
>>> Hi,
>>>
>>> Shall I use the Postgresql database for a* semantic cache *like the
>>> Redis semantic cache?
>>> Are we currently supporting such usage?
>>>
>>> --
>>> Thanks and regards
>>> Pradeep.T
>>>
>>
>
> --
> Thanks and regards
> Pradeep.T
>


Re: Semantic cache capability for Postgresql db

2024-07-17 Thread Christophe Pettus



> On Jul 12, 2024, at 06:18, pradeep t  wrote:
> Shall I use the Postgresql database for a semantic cache like the Redis 
> semantic cache?

If you mean this feature:


https://redis.io/docs/latest/integrate/redisvl/user-guide/semantic-caching/

... there is no direct equivalent in PostgreSQL.  This is something you'd have 
to implement on top of PostgreSQL's existing extensions, such as pgvector:

https://github.com/pgvector/pgvector



Re: Issue with configuration parameter "require_auth"

2024-07-17 Thread Tom Lane
"Gaisford, Phillip"  writes:
> So what would a connection URI using require_auth properly look like? Here’s 
> what I have been trying:

> url: 
> "postgres://postgres:postgres@localhost:5432/aioli?require_auth=password&application_name=aioli-controller&sslmode=disable&sslrootcert=",

Hm, that looks right (and behaves as-expected here).

What version of what client-side stack are you using?  For me, libpq 16
and up recognize this parameter, while older versions fail with

psql: error: invalid URI query parameter: "require_auth"

If you're using some other client driver, maybe it doesn't know this
parameter and guesses that it should be passed to the server.

regards, tom lane




Planet Postgres and the curse of AI

2024-07-17 Thread Greg Sabino Mullane
I've been noticing a growing trend of blog posts written mostly, if not
entirely, with AI (aka LLMs, ChatGPT, etc.). I'm not sure where to raise
this issue. I considered a blog post, but this mailing list seemed a better
forum to generate a discussion.

The problem is two-fold as I see it.

First, there is the issue of people trying to game the system by churning
out content that is not theirs, but was written by a LLM. I'm not going to
name specific posts, but after a while it gets easy to recognize things
that are written mostly by AI.

These blog posts are usually generic, describing some part of Postgres
in an impersonal, mid-level way. Most of the time the facts are not
wrong, per se, but they lack nuances that a real DBA would bring to the
discussion, and often leave important things out. Code examples are often
wrong in subtle ways. Places where you might expect a deeper discussion are
glossed over.

So this first problem is that it is polluting the Postgres blogs with
overly bland, moderately helpful posts that are not written by a human, and
do not really bring anything interesting to the table. There is a place for
posts that describe basic Postgres features, but the ones written by humans
are much better. (yeah, yeah, "for now" and all hail our AI overlords in
the future).

The second problem is worse, in that LLMs are not merely gathering
information, but have the ability to synthesize new conclusions and facts.
In short, they can lie. Or hallucinate. However you want to call it, it's a
side effect of the way LLMs work. In a technical field like Postgres, this
can be a very bad thing. I don't know how widespread this is, but I was
tipped off about this over a year ago when I came across a blog suggesting
using the "max_toast_size configuration parameter". For those not
familiar, I can assure you that Postgres does not have, nor will likely
ever have, a GUC with that name.

As anyone who has spoken with ChatGPT knows, getting small important
details correct is not its forte. I love ChatGPT and actually use it daily.
It is amazing at doing certain tasks. But writing blog posts should not be
one of them.

Do we need a policy or a guideline for Planet Postgres? I don't know. It
can be a gray line. Obviously spelling and grammar checking is quite
okay, and making up random GUCs is not, but the middle bit is very hazy.
(Human) thoughts welcome.

Cheers,
Greg


Re: Planet Postgres and the curse of AI

2024-07-17 Thread Pavel Stehule
st 17. 7. 2024 v 19:22 odesílatel Greg Sabino Mullane 
napsal:

> I've been noticing a growing trend of blog posts written mostly, if not
> entirely, with AI (aka LLMs, ChatGPT, etc.). I'm not sure where to raise
> this issue. I considered a blog post, but this mailing list seemed a better
> forum to generate a discussion.
>
> The problem is two-fold as I see it.
>
> First, there is the issue of people trying to game the system by churning
> out content that is not theirs, but was written by a LLM. I'm not going to
> name specific posts, but after a while it gets easy to recognize things
> that are written mostly by AI.
>
> These blog posts are usually generic, describing some part of Postgres
> in an impersonal, mid-level way. Most of the time the facts are not
> wrong, per se, but they lack nuances that a real DBA would bring to the
> discussion, and often leave important things out. Code examples are often
> wrong in subtle ways. Places where you might expect a deeper discussion are
> glossed over.
>
> So this first problem is that it is polluting the Postgres blogs with
> overly bland, moderately helpful posts that are not written by a human, and
> do not really bring anything interesting to the table. There is a place for
> posts that describe basic Postgres features, but the ones written by humans
> are much better. (yeah, yeah, "for now" and all hail our AI overlords in
> the future).
>
> The second problem is worse, in that LLMs are not merely gathering
> information, but have the ability to synthesize new conclusions and facts.
> In short, they can lie. Or hallucinate. However you want to call it, it's a
> side effect of the way LLMs work. In a technical field like Postgres, this
> can be a very bad thing. I don't know how widespread this is, but I was
> tipped off about this over a year ago when I came across a blog suggesting
> using the "max_toast_size configuration parameter". For those not
> familiar, I can assure you that Postgres does not have, nor will likely
> ever have, a GUC with that name.
>
> As anyone who has spoken with ChatGPT knows, getting small important
> details correct is not its forte. I love ChatGPT and actually use it daily.
> It is amazing at doing certain tasks. But writing blog posts should not be
> one of them.
>
> Do we need a policy or a guideline for Planet Postgres? I don't know. It
> can be a gray line. Obviously spelling and grammar checking is quite
> okay, and making up random GUCs is not, but the middle bit is very hazy.
> (Human) thoughts welcome.
>

It is very unpleasant to read a long article and only understand at the end
that it contains zero valuable information. The situation on Planet MariaDB
(https://mariadb.org/planet/) was terrible, but it has now been cleaned up.
I am for some form of moderation, and for gently reaching out to authors who
write articles that add no value over the documentation.

Regards

Pavel



>
> Cheers,
> Greg
>
>


Re: Planet Postgres and the curse of AI

2024-07-17 Thread Kashif Zeeshan
Hi Greg

I agree with you on the misuse of AI-based tools; in my experience with
Postgres, the solutions they suggest sometimes don't work.
It's not bad to get help from these tools, but taking every solution from
them is counterproductive.
I think people should take care when using these tools to suggest
solutions for real-world problems.

Regards
Kashif Zeeshan

On Wed, Jul 17, 2024 at 10:22 PM Greg Sabino Mullane 
wrote:

> I've been noticing a growing trend of blog posts written mostly, if not
> entirely, with AI (aka LLMs, ChatGPT, etc.). I'm not sure where to raise
> this issue. I considered a blog post, but this mailing list seemed a better
> forum to generate a discussion.
>
> The problem is two-fold as I see it.
>
> First, there is the issue of people trying to game the system by churning
> out content that is not theirs, but was written by a LLM. I'm not going to
> name specific posts, but after a while it gets easy to recognize things
> that are written mostly by AI.
>
> These blog posts are usually generic, describing some part of Postgres
> in an impersonal, mid-level way. Most of the time the facts are not
> wrong, per se, but they lack nuances that a real DBA would bring to the
> discussion, and often leave important things out. Code examples are often
> wrong in subtle ways. Places where you might expect a deeper discussion are
> glossed over.
>
> So this first problem is that it is polluting the Postgres blogs with
> overly bland, moderately helpful posts that are not written by a human, and
> do not really bring anything interesting to the table. There is a place for
> posts that describe basic Postgres features, but the ones written by humans
> are much better. (yeah, yeah, "for now" and all hail our AI overlords in
> the future).
>
> The second problem is worse, in that LLMs are not merely gathering
> information, but have the ability to synthesize new conclusions and facts.
> In short, they can lie. Or hallucinate. However you want to call it, it's a
> side effect of the way LLMs work. In a technical field like Postgres, this
> can be a very bad thing. I don't know how widespread this is, but I was
> tipped off about this over a year ago when I came across a blog suggesting
> using the "max_toast_size configuration parameter". For those not
> familiar, I can assure you that Postgres does not have, nor will likely
> ever have, a GUC with that name.
>
> As anyone who has spoken with ChatGPT knows, getting small important
> details correct is not its forte. I love ChatGPT and actually use it daily.
> It is amazing at doing certain tasks. But writing blog posts should not be
> one of them.
>
> Do we need a policy or a guideline for Planet Postgres? I don't know. It
> can be a gray line. Obviously spelling and grammar checking is quite
> okay, and making up random GUCs is not, but the middle bit is very hazy.
> (Human) thoughts welcome.
>
> Cheers,
> Greg
>
>


Re: Issue with configuration parameter "require_auth"

2024-07-17 Thread Gaisford, Phillip
I think Tom is right.

I am using v5.6 of https://github.com/jackc/pgx

A quick read of the code reveals no awareness of require_auth, so I have filed 
an issue with the project.

From: Tom Lane 
Date: Wednesday, July 17, 2024 at 1:11 PM
To: Gaisford, Phillip 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Issue with configuration parameter "require_auth"
"Gaisford, Phillip"  writes:
> So what would a connection URI using require_auth properly look like? Here’s 
> what I have been trying:

> url: 
> "postgres://postgres:postgres@localhost:5432/aioli?require_auth=password&application_name=aioli-controller&sslmode=disable&sslrootcert=",

Hm, that looks right (and behaves as-expected here).

What version of what client-side stack are you using?  For me, libpq 16
and up recognize this parameter, while older versions fail with

psql: error: invalid URI query parameter: "require_auth"

If you're using some other client driver, maybe it doesn't know this
parameter and guesses that it should be passed to the server.

regards, tom lane


Re: Planet Postgres and the curse of AI

2024-07-17 Thread Adrian Klaver

On 7/17/24 10:21, Greg Sabino Mullane wrote:
I've been noticing a growing trend of blog posts written mostly, if not 
entirely, with AI (aka LLMs, ChatGPT, etc.). I'm not sure where to raise 
this issue. I considered a blog post, but this mailing list seemed a 
better forum to generate a discussion.






Do we need a policy or a guideline for Planet Postgres? I don't know. It 
can be a gray line. Obviously spelling and grammar checking is quite 
okay, and making up random GUCs is not, but the middle bit is very hazy. 
(Human) thoughts welcome.


A policy would be nice, just not sure how enforceable it would be. How 
do you differentiate between the parrot that is AI and one that is 
human? I run across all manner of blog posts where folks have lifted 
content from the documentation or other sources without attribution, 
which is basically what AI generated content is. AI does like to 
embellish and make things up (ask the NYC lawyer suing the airlines about 
that), though that is a human trait as well.




Cheers,
Greg



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Query on partitioned table needs memory n_partitions * work_mem

2024-07-17 Thread David Rowley
On Fri, 12 Jul 2024 at 02:08, Dimitrios Apostolou  wrote:
> I can't help but see this as a bug. I see many issues:
>
> * postgres is not reading from partitions in parallel, but one after the
>other. It shouldn't need all this memory simultaneously.

I don't know for Hash Aggregate, but for nodes like Sort, we still
hold onto the tuplestore after returning the last tuple as a rescan
might want to read those tuples again. There's also a mark/restore
that might want to rewind a little to match up to the next outer tuple
of a Merge Join.

It might be possible to let go of the memory sooner in plans when
returning the final tuple means we'll never need the memory again, but
that would require figuring out all the cases where that could happen
and ensuring we don't ever release memory when it's required again.

> * The memory is unnecessarily allocated early on, before any partitions
>are actually aggregated. I know this because I/O is slow on this device
>and the table sizes are huge, it's simply not possible that postgres
>went through all partitions and blew up the memory. That would take
>hours, but the OOM happens seconds after I start the query.

That's interesting. Certainly, there is some memory allocated during
executor startup, but that amount should be fairly small.  Are you
able to provide a self-contained test case that shows the memory
blowing up before execution begins?

> Having wasted long time in that, the minimum I can do is submit a
> documentation patch. At enable_partitionwise_aggregate something like
> "WARNING it can increase the memory usage by at least
> n_partitions * work_mem". How do I move on for such a patch? Pointers
> would be appreciated. :-)

I think mentioning something about this in enable_partitionwise_join
and enable_partitionwise_aggregate is probably wise. I'll propose a
patch on pgsql-hackers.
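
For a sense of scale, the arithmetic behind the proposed warning (the numbers
below are illustrative, not taken from this thread):

```python
# Worst case for partitionwise aggregation: each per-partition aggregate
# may use up to work_mem at once, so memory scales with partition count.
work_mem_kb = 4 * 1024      # PostgreSQL's default work_mem (4MB)
n_partitions = 1000         # hypothetical partition count
worst_case_kb = n_partitions * work_mem_kb
worst_case_mb = worst_case_kb // 1024   # 4000 MB, i.e. ~4 GB from a 4MB work_mem
```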

David




Re: Support of Postgresql 14 for Sles15Sp6

2024-07-17 Thread Akram Hussain
Sorry for the typo.

My intention was PostgreSQL 14 on SLES 15 SP6.

Any help would be appreciated

Regards,
Akram.
On Wed, 17 Jul, 2024, 4:07 pm Akram Hussain,  wrote:

> Hi,
>
> I am new to this community.
>
> I have one question.
>
> Is there any planned release of PostgreSQL 14 for SLES 15 SP6?
>
> Could you please help me with this.
>
> Regards,
> Akram.
>


Re: Support of Postgresql 14 for Sles15Sp6

2024-07-17 Thread Adrian Klaver

On 7/17/24 15:43, Akram Hussain wrote:

Sorry for the typo.

My intention was PostgreSQL 14 on SLES 15 SP6.

Any help would be appreciated


As Tom Lane pointed out there is this:

https://zypp.postgresql.org/howtozypp/

There is also:

https://build.opensuse.org/package/show/server:database:postgresql/postgresql14
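
For what it's worth, installing from the PGDG zypper repository looks roughly
like this (a sketch only: the exact repo RPM URL must be taken from the
howtozypp page above, as the path shown here is an assumption):

```shell
# Add the PGDG repository -- take the real repo RPM URL from
# https://zypp.postgresql.org/howtozypp/ ; the path below is illustrative.
sudo zypper install https://download.postgresql.org/pub/repos/zypp/reporpms/SLES-15-x86_64/pgdg-sles-15-repo-latest.noarch.rpm
sudo zypper refresh
# Then install PostgreSQL 14 itself.
sudo zypper install postgresql14 postgresql14-server
```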




Regards,
Akram.
On Wed, 17 Jul, 2024, 4:07 pm Akram Hussain wrote:


Hi,

I am new to this community.

I have one question.

Is there any planned release of PostgreSQL 14 for SLES 15 SP6?

Could you please help me with this.

Regards,
Akram.



--
Adrian Klaver
adrian.kla...@aklaver.com





Searching for libpq5-13 and libpq5-devel-13 for CentOS/RHEL 7

2024-07-17 Thread H
Searching for the above. RHEL/CentOS 7 went EOL two weeks ago and it seems the
two files above have disappeared. Does anyone know where I can find them?
Needed for a project to work with PostgreSQL 13.

Thanks.





Re: Searching for libpq5-13 and libpq5-devel-13 for CentOS/RHEL 7

2024-07-17 Thread H
On Wed, 2024-07-17 at 20:36 -0400, H wrote:
> Searching for the above. RHEL/CentOS 7 went EOL two weeks ago and it seems 
> the two files above have disappeared. Does anyone know where I can find them? 
> Needed for a project to work with PostgreSQL
> 13.
>
> Thanks.

I should have added that I searched pgdg13/7/x86_64 as well as EPEL etc.





Re: Searching for libpq5-13 and libpq5-devel-13 for CentOS/RHEL 7

2024-07-17 Thread Adrian Klaver

On 7/17/24 17:36, H wrote:

Searching for the above. RHEL/CentOS 7 went EOL two weeks ago and it seems the 
two files above have disappeared. Does anyone know where I can find them? 
Needed for a project to work with PostgreSQL
13.



Are they not in?:

https://yum.postgresql.org/13/redhat/rhel-7-x86_64/repoview/postgresql13-libs.html

And here?:

https://yum.postgresql.org/13/redhat/rhel-7-x86_64/repoview/postgresql13-devel.html
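
In case it helps future searchers, a sketch of pulling those packages from the
PGDG repo on an EL7 box (verify the repo RPM URL at yum.postgresql.org before
relying on it; note the packages are named postgresql13-libs /
postgresql13-devel rather than libpq5-13 / libpq5-devel-13):

```shell
# Install the PGDG repository definition for EL7 (URL should be
# double-checked at https://yum.postgresql.org/ first).
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# The client library and headers the project needs.
sudo yum install postgresql13-libs postgresql13-devel
```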


Thanks.





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Searching for libpq5-13 and libpq5-devel-13 for CentOS/RHEL 7

2024-07-17 Thread H
On Wed, 2024-07-17 at 17:58 -0700, Adrian Klaver wrote:
> On 7/17/24 17:36, H wrote:
> > Searching for the above. RHEL/CentOS 7 went EOL two weeks ago and it seems 
> > the two files above have disappeared. Does anyone know where I can find 
> > them? Needed for a project to work with
> > PostgreSQL
> > 13.
> >
>
> Are they not in?:
>
> https://yum.postgresql.org/13/redhat/rhel-7-x86_64/repoview/postgresql13-libs.html
>
> And here?:
>
> https://yum.postgresql.org/13/redhat/rhel-7-x86_64/repoview/postgresql13-devel.html
>
> > Thanks.
> >
> >
> >
>

It was, I had missed it. Thank you.





psql sslmode behavior and trace_connection_negotiation in PG17

2024-07-17 Thread Hao Zhang
Hi

I tried connecting with psql (client sslmode = require) to a PG17 server
requiring SSL, with trace_connection_negotiation = "on". "SSLRequest
accepted" is logged twice, with two different PIDs. I believe PID 15553
is psql and 15554 is the PG backend. How do you explain the two
connections with SSLRequest? From the log, it looks as if psql made a
connection to itself with SSLRequest and proxied it to the Postgres
server with a full SSL negotiation. I never saw a log entry for 15553's
connection being closed when I closed the psql process. Does this
behavior match what was discussed in the hacker thread below about an
additional connection?

2024-07-17 03:06:54.492 PDT [15553] LOG:  connection received:
host=127.0.0.1 port=54002
2024-07-17 03:06:54.492 PDT [15553] LOG:  SSLRequest accepted
2024-07-17 03:06:59.982 PDT [15554] LOG:  connection received:
host=127.0.0.1 port=54004
2024-07-17 03:06:59.982 PDT [15554] LOG:  SSLRequest accepted
2024-07-17 03:06:59.994 PDT [15554] LOG:  connection authenticated:
identity="postgres" method=md5 (/usr/local/pgsql/data/pg_hba.conf:18)
2024-07-17 03:06:59.994 PDT [15554] LOG:  connection authorized:
user=postgres database=postgres application_name=psql SSL enabled
(protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256)


https://www.postgresql.org/message-id/flat/CAM-w4HOEAzxyY01ZKOj-iq=M4-VDk=vzqgusuqitfjfdzae...@mail.gmail.com


Re: psql sslmode behavior and trace_connection_negotiation in PG17

2024-07-17 Thread Tom Lane
Hao Zhang  writes:
> I tried to connect with psql + client sslmode = require + server requiring
> ssl with PG17 and trace_connection_negotiation = "on". So "SSLRequest
> accepted" is logged twice with two different PID. I believe the PID 15553
> is psql and 15554 is the PG backend.

Certainly not: psql has no ability to write to the postmaster log.
These PIDs are two different backend sessions.

> How do you explain the two connections
> with SSLRequest?

> 2024-07-17 03:06:54.492 PDT [15553] LOG:  connection received:
> host=127.0.0.1 port=54002
> 2024-07-17 03:06:54.492 PDT [15553] LOG:  SSLRequest accepted
> 2024-07-17 03:06:59.982 PDT [15554] LOG:  connection received:
> host=127.0.0.1 port=54004
> 2024-07-17 03:06:59.982 PDT [15554] LOG:  SSLRequest accepted
> 2024-07-17 03:06:59.994 PDT [15554] LOG:  connection authenticated:
> identity="postgres" method=md5 (/usr/local/pgsql/data/pg_hba.conf:18)
> 2024-07-17 03:06:59.994 PDT [15554] LOG:  connection authorized:
> user=postgres database=postgres application_name=psql SSL enabled
> (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256)

The first connection comes from psql trying to connect and discovering
that a password is required.  It gives up on that connection because
it hasn't got a password, and asks the user (you) for that.  We can
see from the log that it took about five and a half seconds for you
to type your password, and then there was a second connection attempt
that was successful.

By default, the server doesn't log unceremonious client disconnections
after the password challenge, precisely because psql behaves this way.
So that's why we don't see any disconnection log entry from the
ill-fated 15553 session.  I kind of wonder if we could suppress these
other log entries too, but it's not very clear how.

If this behavior really annoys you, you can use psql's -W switch
to force it to prompt for a password in advance of knowing whether
the server will demand one.
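
For reference, the two knobs involved, as a minimal sketch (the GUC is
server-side; -W is client-side):

```shell
# Server side (postgresql.conf on PG 17): log each negotiation step.
#   trace_connection_negotiation = on
#
# Client side: -W makes psql prompt for the password *before* connecting,
# so the throwaway first connection (PID 15553 above) never appears.
psql "host=127.0.0.1 dbname=postgres user=postgres sslmode=require" -W
```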

regards, tom lane




Regarding vacuum freeze locking mechanism

2024-07-17 Thread Durgamahesh Manne
Hi

Do new inserts block while performing vacuum freeze operations ?

When autovacuum runs, it will freeze the transaction ID (TXID) of the
table it's working on. This means that any transactions that started
before autovacuum began will be allowed to complete, but new
transactions will be blocked until the autovacuum finishes.

Could you please provide more clarity on this? Which locks are taken on
the tables by freeze?

Your response is highly appreciated

Regards,
Durga Mahesh


Re: Regarding vacuum freeze locking mechanism

2024-07-17 Thread Kashif Zeeshan
Hi

On Thu, Jul 18, 2024 at 10:26 AM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:

> Hi
>
> Do new inserts block while performing vacuum freeze operations ?
>
Generally, VACUUM FREEZE does not block inserts as PG uses Multi-Version
Concurrency Control (MVCC) which allows multiple transactions to operate on
the same table without interfering with each other.


>
> when autovacuum runs , it will freeze the transaction ID (TXID) of the
> table it's working on. This means that any transactions that started before
> autovacuum began will be allowed to complete.but new transactions will be
> blocked until the autovacuum finishes.
>
> Could you please provide more clarity on this? Which lock triggers on the
> tables are being used by freeze?
>
> Your response is highly appreciated
>
> Regards,
> Durga Mahesh
>
>
>


Re: Regarding vacuum freeze locking mechanism

2024-07-17 Thread David G. Johnston
On Wednesday, July 17, 2024, Durgamahesh Manne 
wrote:

> when autovacuum runs , it will freeze the transaction ID (TXID) of the
> table it's working on.
>

This statement is incorrect.  A table as a whole does not have a txid.
Freezing makes it so individual tuples get assigned an always-in-the-past
txid.  Then, the table can recompute how far away its furthest back txid is
from being considered in-the-future.  That gap should increase since the
furthest away txids were the ones being frozen.
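
David's "in-the-past vs. in-the-future" gap can be modeled with plain modular
arithmetic (a toy sketch of 32-bit circular xid comparison, not PostgreSQL's
actual code):

```python
# Toy model of 32-bit transaction-id circular comparison. A tuple's xmin
# is "in the past" only while it is less than ~2^31 transactions behind
# the current xid; freezing replaces old xmins with a permanently-past
# value so the gap can never wrap into "the future".
M = 2 ** 32          # xids live on a 32-bit circle
HALF = 2 ** 31       # half the circle separates "past" from "future"

def precedes(a: int, b: int) -> bool:
    """True if xid a is in the past relative to xid b (circular compare)."""
    return (a - b) % M >= HALF

# A tuple written at xid 100 is in the past at xid 1000:
assert precedes(100, 1000)
# ...but after ~2^31 more transactions it would wrap into "the future",
# which is the wraparound hazard that freezing exists to prevent:
assert not precedes(100, (100 + HALF + 1) % M)
```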

Inserts might get delayed a brief moment if it just happens the page they
want to insert onto is presently being worked on.  But the odds there seem
low.

David J.


Re: Regarding vacuum freeze locking mechanism

2024-07-17 Thread David G. Johnston
On Wednesday, July 17, 2024, Durgamahesh Manne 
wrote:

>
> Could you please provide more clarity on this? Which lock triggers on the
> tables are being used by freeze?
>

 https://www.postgresql.org/docs/current/explicit-locking.html

SHARE UPDATE EXCLUSIVE
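
Concretely (with a hypothetical table `t`): the SHARE UPDATE EXCLUSIVE lock
that VACUUM takes does not conflict with the ROW EXCLUSIVE lock a plain
INSERT takes, so the two can run concurrently:

```sql
-- Session 1: VACUUM (FREEZE) takes SHARE UPDATE EXCLUSIVE on t.
VACUUM (FREEZE) t;

-- Session 2, at the same time: INSERT takes ROW EXCLUSIVE, which does
-- not conflict with SHARE UPDATE EXCLUSIVE, so it is not blocked.
INSERT INTO t DEFAULT VALUES;

-- Either session: inspect the locks while the vacuum is running.
SELECT l.mode, l.granted
FROM pg_locks l JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 't';
```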

David J.