Re: Help in vetting error using "pg_upgrade" - steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2025-01-24 Thread Bharani SV-forum
Adrian,
Thanks. This is the exact error which the system admin is facing:

 postgresql15-contrib installation on Amazon Linux 2 fails on Python shared lib 
dependency

https://www.postgresql.org/message-id/CABUevEz_OHR%2BaGU%2B7yuhpfJD%2BeWgC8aHgVRjP5U30kJqu%2B7jmg%40mail.gmail.com
On Friday, January 24, 2025 at 11:22:23 AM EST, Adrian Klaver 
 wrote:  
 
 On 1/24/25 07:50, Bharani SV-forum wrote:
> Team
> Need your additional input.
> VM is based on EC2 OS Version = Amazon Linux 2
> Existing DB version = 13.X - Pg Community under EC2 - VM (Same VM)
> Target DB version = 15.x  - Pg Community under EC2 - VM (Same VM)
> 
> I will narrate the latest error:
> 
> 
> 
> executing: SELECT pg_catalog.set_config('search_path', '', false);
> Checking for presence of required libraries                 fatal
> 

> 
> Any guidance on how to overcome the error?
> 
> My Unix system admin says that he is facing a build issue with the 
> Postgres 15 version w.r.t. "postgresql15-contrib.x86_64" 
> under the AWS-based OS "Amazon Linux 2".
> His version is "It needs libpython3.6m.so.1.0()(64bit) to install 
> package: postgresql15-contrib-15.10-1PGDG.rhel7.x86_64. in our 
> environment. I can't install python3.6 libraries because we already have 
> a different python version installed and clashing"

 From what I know Amazon Linux 2 is a RH clone, confirmed by 
postgresql15-contrib-15.10-1PGDG.rhel7.x86_64 above.

 From the link below it seems it is possible to run multiple versions of 
Python together on RHEL 7, so it should work on AL2 also(?):

https://developers.redhat.com/blog/install-python3-rhel#installing_python_3_on_rhel_7

> 
> Any guidance to overcome the error, as the existing Python version we 
> use clashes with the prerequisite Python 3.6.
> 
> My Unix admin also suggested installing it under RHEL 7 w.r.t. the 
> PostgreSQL EC2 version as the best viable option. I have tried the 
> previously suggested steps and found "pg_upgrade" the most viable and 
> fastest.
> 
> Regards


-- 
Adrian Klaver
adrian.kla...@aklaver.com

  

Re: Help in vetting error using "pg_upgrade" - steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2025-01-24 Thread Adrian Klaver




On 1/24/25 10:01 AM, Bharani SV-forum wrote:

Adrian
Thanks

This is the exact error which the system admin is facing


    postgresql15-contrib installation on Amazon Linux 2 fails on Python
  shared lib dependency


Which from your post the admin said was due to:

"His version is "It needs libpython3.6m.so.1.0()(64bit)" "


Note the libpython3.6.


The link I posted previously:

https://developers.redhat.com/blog/install-python3-rhel#installing_python_3_on_rhel_7

shows how to install Python 3.6.
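Before installing anything, it may help to ask yum directly whether the missing library can be satisfied at all. A dry-run sketch (the commands are only printed here; remove the echoes to run them on the AL2 host — whether AL2's repositories actually carry a libpython3.6m package is an open question):

```shell
# Dry-run sketch: print the diagnostic commands one would run on the AL2
# host. Remove the "echo" lines' quoting indirection to execute for real.
cmd_provides="yum provides 'libpython3.6m.so.1.0()(64bit)'"  # which package supplies the lib?
cmd_present="ldconfig -p | grep libpython3"                  # which libpython versions exist now?
echo "$cmd_provides"
echo "$cmd_present"
```

If `yum provides` finds a candidate package, it can be installed alongside the existing Python; shared libraries of different minor versions (libpython3.6m, libpython3.7m, ...) do not conflict with each other.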


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Help in vetting error using "pg_upgrade" - steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2025-01-24 Thread Adrian Klaver

On 1/24/25 07:50, Bharani SV-forum wrote:

Team
Need your additional input.
VM is based on EC2 OS Version = Amazon Linux 2
Existing DB version = 13.X - Pg Community under EC2 - VM (Same VM)
Target DB version = 15.x  - Pg Community under EC2 - VM (Same VM)

I will narrate the latest error:



executing: SELECT pg_catalog.set_config('search_path', '', false);
Checking for presence of required libraries                 fatal





Any guidance on how to overcome the error?

My Unix system admin says that he is facing a build issue with the 
Postgres 15 version w.r.t. "postgresql15-contrib.x86_64" 
under the AWS-based OS "Amazon Linux 2".
His version is "It needs libpython3.6m.so.1.0()(64bit) to install 
package: postgresql15-contrib-15.10-1PGDG.rhel7.x86_64. in our 
environment. I can't install python3.6 libraries because we already have 
a different python version installed and clashing"


From what I know Amazon Linux 2 is a RH clone, confirmed by 
postgresql15-contrib-15.10-1PGDG.rhel7.x86_64 above.


From the link below it seems it is possible to run multiple versions of 
Python together on RHEL 7, so it should work on AL2 also(?):


https://developers.redhat.com/blog/install-python3-rhel#installing_python_3_on_rhel_7



Any guidance to overcome the error, as the existing Python version we use 
clashes with the prerequisite Python 3.6.


My Unix admin also suggested installing it under RHEL 7 w.r.t. the 
PostgreSQL EC2 version as the best viable option. I have tried the 
previously suggested steps and found "pg_upgrade" the most viable and 
fastest.


Regards



--
Adrian Klaver
adrian.kla...@aklaver.com





Help in vetting error using "pg_upgrade" - steps for Postgres DB upgrade from Ver 13.X to ver 15.X

2025-01-24 Thread Bharani SV-forum
Team,
Need your additional input.
VM is based on EC2, OS Version = Amazon Linux 2
Existing DB version = 13.X - Pg Community under EC2 - VM (Same VM)
Target DB version = 15.x - Pg Community under EC2 - VM (Same VM)

I will narrate the latest error:


executing: SELECT pg_catalog.set_config('search_path', '', false);
Checking for presence of required libraries                 fatal

Your installation references loadable libraries that are missing from the
new installation.  You can add these libraries to the new installation,
or remove the functions using them from the old installation.  A list of
problem libraries is in the file:
    /var/lib/pgsql/15/data/pg_upgrade_output.d/20250122T161405.335/loadable_libraries.txt

The output of 
"/var/lib/pgsql/15/data/pg_upgrade_output.d/20250122T161405.335/loadable_libraries.txt" is:

could not load library "$libdir/dblink": ERROR:  could not access file 
"$libdir/dblink": No such file or directory
In database: main
I cross-checked the existing (old) DB version package list against the new 
DB version package list and found that one package is missing: 
postgresql15-contrib.x86_64
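For reference, the missing-library list can be pulled out of that file mechanically. A small sketch (the file contents are reproduced from the quote above; the path is shortened for the example):

```shell
# Recreate the quoted loadable_libraries.txt locally and extract the
# unique "$libdir/..." names -- each one maps to an extension whose
# package (here postgresql15-contrib, which ships dblink) must be
# installed before pg_upgrade can pass its library check.
cat > loadable_libraries.txt <<'EOF'
could not load library "$libdir/dblink": ERROR:  could not access file "$libdir/dblink": No such file or directory
In database: main
EOF
missing=$(grep -o '\$libdir/[a-z_]*' loadable_libraries.txt | sort -u)
echo "$missing"
```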


Any guidance on how to overcome the error?
My Unix system admin says that he is facing a build issue with the Postgres 15 
version w.r.t. "postgresql15-contrib.x86_64" under the AWS-based OS "Amazon 
Linux 2". His version is: "It needs libpython3.6m.so.1.0()(64bit) to install 
package: postgresql15-contrib-15.10-1PGDG.rhel7.x86_64 in our environment. I 
can't install python3.6 libraries because we already have a different python 
version installed and clashing."
Any guidance to overcome the error, as the existing Python version we use 
clashes with the prerequisite Python 3.6.
My Unix admin also suggested installing it under RHEL 7 w.r.t. the 
PostgreSQL EC2 version as the best viable option. I have tried the 
previously suggested steps and found "pg_upgrade" the most viable and 
fastest.
Regards 
On Tuesday, December 31, 2024 at 11:15:26 AM EST, Bharani SV-forum 
 wrote:  
 
Team,
I followed Greg's suggested steps. One of the big databases had only one 
table, with around four million records. I am doing a dev environment 
restoration into the new VM; the target VM is a POC server, and it took 3 hrs 
to restore the four million records. Now it is running lo_open / lo_close / 
lowrite etc., i.e. pg_dump creates a lot of pg_catalog statements.
Is there any alternate way to speed up this process?
I can see in select count(*) that the record counts match (target and source).
Regards

On Wednesday, December 4, 2024 at 10:47:26 AM EST, Greg Sabino Mullane 
 wrote:  
 
 On Wed, Dec 4, 2024 at 7:42 AM Bharani SV-forum  
wrote:
 a) Are the above said steps correct with the given existing and proposed 
setup?

No. Here are some steps:
* Install Postgres on the new VM
However you get it, use the newest version you can. As of this writing, it is 
Postgres 17.2. Version 15 is okay, but going to 17 now means a better Postgres 
today, and no worrying about replacing v15 in three years.

* Create a new Postgres cluster
On the new VM, use the initdb command to create a new data directory. Use the 
--data-checksums option.

* Start it up
Adjust your postgresql.conf as needed. Adjust your pg_hba.conf as needed. 
Install any extensions used on the old VM. Start the cluster using the pg_ctl 
command (or systemctl).

* Test connection to the old VM from the new VM
On the new VM, see if you can connect to the old one:
psql -h oldvm -p 5432 --list
You may need to adjust firewalls and pg_hba.conf on the old VM.

* Copy the data
Run this on the new VM, adjusting ports as needed:
time pg_dumpall -h oldvm -p 5432 | psql -p 5432
Bonus points for doing this via screen/tmux to prevent interruptions.

* Generate new statistics and vacuum
On the new VM, run:
psql -c 'vacuum freeze'
psql -c 'analyze'

* Test your application

* Set up all the other stuff (systemd integration, logrotate, cronjobs, etc.) 
as needed
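The steps above can be sketched as one script. This is a dry run that only prints each command without executing it; the hostname "oldvm", the ports, and the v17 data directory path are placeholders from the steps above:

```shell
# Dry-run sketch of the migration steps: run() records and prints each
# command without executing it; swap the echo for eval to run for real.
PLAN=""
run() { PLAN="$PLAN+ $*
"; echo "+ $*"; }

run initdb --data-checksums -D /var/lib/pgsql/17/data      # new cluster
run pg_ctl -D /var/lib/pgsql/17/data start                 # start it up
run psql -h oldvm -p 5432 --list                           # can we reach the old VM?
run "pg_dumpall -h oldvm -p 5432 | psql -p 5432"           # copy the data
run "psql -c 'VACUUM FREEZE' && psql -c 'ANALYZE'"         # stats + vacuum
```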
As Peter mentioned earlier, this can be done without disrupting anything, and 
is easy to test and debug. The exact steps may vary a little, as I'm not 
familiar with how Amazon Linux packages Postgres, but the basics are the same.
Take it slow. Go through each of these steps one by one. If you get stuck or 
run into an issue, stop and solve it, reaching out to this list as necessary.
Cheers,
Greg


Re: Is postgresql's json strong consistency or eventually consistency?

2025-01-24 Thread Torsten Förtsch
On Fri, Jan 24, 2025 at 4:48 AM anlex N  wrote:

> Hello Adrian, Laurenz. Have you tried postgresql's json in your everyday
> life? How fast is it? How does it compare to mongodb?
>

My honest opinion: JSON(B) in PG is great as a data transfer format but not
so much for storage. Here is an example. Up to last year I had a table in
my DB with rows consisting of a bunch of numbers in many columns. Over
maybe 15 years of business that table had grown to a few TB in size. Then
the development team decided they needed more flexibility and started to
fill a new table with the same information in JSON. After only 1 year of
business that table is now in the same size range.

Another example, the developers came up with a new table to store the
result of the JSON response they got from a 3rd party application. That
worked for a while and the table behaved normally. Suddenly it started
growing by a few GB a day. Turns out the 3rd party had decided to include
an MB-sized picture in the JSON response. Our team then faithfully stored
all of that crap unfiltered in the DB.

That is not an answer to your original question, of course. How fast it is
depends very much on the use case. If you are talking about access methods
alone, the ability of indexing JSON and such, then PG is on par with mongo
if not better. But nobody but you can give you a definite answer relating
to your situation.
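On the indexing point: what "indexing JSON" in PG usually means is a GIN index over a jsonb column. A sketch of the SQL one would feed to psql (the table and column names are invented for illustration, and the SQL is only printed here since there is no live database in this sketch):

```shell
# Build and print the example SQL. jsonb_path_ops produces a smaller GIN
# index that supports the @> containment operator used in the query.
sql=$(cat <<'SQL'
CREATE INDEX events_doc_gin ON events USING gin (doc jsonb_path_ops);
SELECT count(*) FROM events WHERE doc @> '{"type": "click"}';
SQL
)
printf '%s\n' "$sql"
```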


Re: Is postgresql's json strong consistency or eventually consistency?

2025-01-24 Thread Torsten Förtsch
On Thu, Jan 16, 2025 at 2:02 PM Laurenz Albe 
wrote:

> On Thu, 2025-01-16 at 15:25 +0300, anlex N wrote:
> > I have searched all archives, but have no results. Please help me.
>
> That's probably because the question doesn't make much sense.
> A single PostgreSQL instance (=cluster) enforces consistency,
> and it doesn't matter which data type you are using.
>
> So I guess the answer should be "strong consistency".


I guess what the original author was trying to ask is how many databases
have to acknowledge a transaction before it is considered committed, stuff
that's called in mongodb write_concern and read_concern.

@anlex, PG at its heart is not a distributed system. When you talk about
consistency in PG you normally mean foreign key consistency or similar,
properties where data in one table is somehow related to data in another
table. A slightly different meaning of consistency comes when you talk
about transaction isolation levels.

All of this is different from Mongodb's read/write consistency. In Mongodb
a write can go to one machine while the subsequent read can fetch data from
another. So, you worry about linearizability, causal consistency and such
things. The only slightly similar question in PG is "Can I see data
somehow, on the master or a replica, that has been written to the database
but where the master has not yet acknowledged the commit to the client?"

The short answer is, yes. As soon as the commit record has been written to
the WAL, it is being transferred and replayed on streaming replicas. After
that the master might still have to do things like waiting for a
synchronous replica to acknowledge the transaction before the transaction
becomes visible on the master. On a fast enough replica, the transaction
can, hence, become visible before it's visible on the master.

https://foertsch.cc/en/postgres/commit-timing

You can control that behavior with the `synchronous_commit` setting in
combination with `synchronous_standby_names`. You can tell the DB, for
instance, a transaction should be considered as committed only when 3 out
of 5 replicas have acknowledged it.
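That "3 out of 5" setup can be written as a postgresql.conf sketch on the primary (the standby names s1..s5 are illustrative):

```
# postgresql.conf on the primary -- standby names are placeholders
synchronous_commit = remote_apply
synchronous_standby_names = 'ANY 3 (s1, s2, s3, s4, s5)'
```

With remote_apply, the commit is acknowledged only once it is replayed and visible on the required standbys, not merely written to their WAL.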

If you set synchronous_commit=remote_apply and configure N out of N
replicas in synchronous_standby_names, that would likely give you a
distributed database with strong consistency where you can direct writes to
the master and read-only load to the replicas. But I guess nobody in his
right mind would do that unless your database is rarely written to.