Re: PG Dump on 11 - Restore on 16 - is possible?

2024-08-13 Thread Kashif Zeeshan
Hi

On Tue, Aug 13, 2024 at 2:47 PM Durumdara  wrote:

> Hello!
>
> We have to upgrade our server. The PG upgrade is impossible because of too
> much data.
> So we will create a new server with PG 16, and we have to migrate the
> databases one by one (from a PG 11 server).
>
> Is it possible?
>
> Does PG16 read PG11 dumps and restore them correctly on the PG16 server?
>
PG provides backward compatibility, so you can restore a PG 11 dump on a
PG 16 server.

>
> Or is it unsupported now?
>
> Thank you!
>
> Best regards
> dd
>


Re: PG Dump on 11 - Restore on 16 - is possible?

2024-08-13 Thread Ron Johnson
On Tue, Aug 13, 2024 at 5:47 AM Durumdara  wrote:

> Hello!
>
> We have to upgrade our server. The PG upgrade is impossible because of too
> much data.
> So we will create a new server with PG 16, and we have to migrate the
> databases one by one (from a PG 11 server).
>
> Is it possible?
>
> Does PG16 read PG11 dumps and restore them correctly on the PG16 server?
>

https://www.postgresql.org/docs/release/16.0/
"See Section 19.6  for
general information on migrating to new major releases."

Section 19.6: https://www.postgresql.org/docs/16/upgrading.html
"Current releases of the dump programs can read data from any server
version back to 9.2."

The major flaw in Section 19.6 is that it repeatedly says "use pg_dumpall",
and only mentions pg_dump once.  Don't use pg_dumpall for the data; use
pg_dump in "directory mode" with multiple threads.  ("pg_dumpall
--globals-only" is still critical for roles and whatnot.)
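
A minimal sketch of that approach (host names, database name, paths, and
job counts are illustrative assumptions, not the exact commands used):

# dump with PG 16 client tools
pg_dumpall -h oldhost -U postgres --globals-only > globals.sql
pg_dump -h oldhost -U postgres -Fd -j 8 -f /backup/mydb.dir mydb

# restore on the new PG 16 server
psql -h newhost -U postgres -f globals.sql postgres
pg_restore -h newhost -U postgres --create -j 8 -d postgres /backup/mydb.dir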

This is how I recently migrated three multi-TB PG 9.6 RHEL6 servers to PG
14 on RHEL8.



Re: PG Dump on 11 - Restore on 16 - is possible?

2024-08-13 Thread Ron Johnson
On Tue, Aug 13, 2024 at 5:47 AM Durumdara  wrote:

> Hello!
>
> We have to upgrade our server. The PG upgrade is impossible because of too
> much data.
> So we will create a new server with PG 16, and we have to migrate the
> databases one by one (from a PG 11 server).
>

I'd also recommend that you update the PG 11 server to 11.22 if possible.

It can hopefully be found here:
https://yum.postgresql.org/repopackages/#pgredhatoldrepos
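
A hedged sketch of what that update might look like on RHEL/CentOS,
assuming the PGDG PG 11 repository is already configured (package and
service names are the usual PGDG ones):

sudo yum update postgresql11\*
sudo systemctl restart postgresql-11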



Re: Insert works but fails for merge

2024-08-13 Thread Greg Sabino Mullane
I just remembered that one of the complaints was not wanting to worry about
looking up the data types. In my previous example, you can also leave out
the types and Postgres will do the right thing. I prefer the explicit data
type version for clarity, but thought I would provide this one for
completeness:

prepare foo as with x as (update tab1 set mid=$2 where id=$1 returning 1)
  insert into tab1 select $1,$2,$3 where not exists (select 1 from x);
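
A hedged usage sketch (assuming tab1's columns are an integer id, an
integer mid, and a text value - illustrative, not taken from the thread):

execute foo(1, 42, 'some value');
-- updates the row with id = 1 if it exists, otherwise inserts it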

Cheers,
Greg


Re: Column type modification in big tables

2024-08-13 Thread Greg Sabino Mullane
On Sat, Aug 10, 2024 at 5:06 PM Lok P  wrote:


> Can someone throw some light: in case we get 5-6 hrs of downtime for this
> change, then what method should we choose for this ALTER operation?
>

We can't really answer that. Only you know what resources you have, what
risk/reward you are willing to handle, and how long things may take. For
that latter item, your best bet is to try this out on the same/similar
hardware and see how long it takes. Do a smaller table and extrapolate if
you need to. Or promote one of your replicas offline and modify that. I've
given you a low-risk / medium-reward option with check constraints, but for
the ALTER TABLE options you really need to try it and see (on non-prod).

> it seems the "USING" clause takes more time as compared to normal ALTER.
> But again I don't see any way to see the progress and estimated completion
> time. Can you share your thoughts on this?


There should be no difference if they are doing the same conversion.

> Will this approach be faster/better as compared to the simple "alter table
> alter column" approach as above?


Seems a lot more complicated to me than a simple ALTER. But measurement is
key. Create a new test cluster using pgBackRest or whatever you have. Then
run your ALTER TABLE and see how long it takes (remember that multiple
columns can be changed in a single ALTER TABLE statement).
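
For instance, a hedged sketch of batching column changes into a single
table rewrite (table and column names are illustrative):

ALTER TABLE big_table
  ALTER COLUMN col1 TYPE bigint,
  ALTER COLUMN col2 TYPE numeric(15,2);
-- one rewrite instead of two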

Cheers,
Greg


Novice with Postgresql - trying simple Stored Procedure

2024-08-13 Thread jim.kosloskey
I have a simple 3-column table. The table maps a month name to its number
and also includes an effective date column. So 12 rows, 3 columns each.

Here is the stored procedure I constructed to get the number when given the
name (3 parameters for the SP):

--/
DROP PROCEDURE month_lookup
(in mth_name TEXT,
inout mth_num TEXT,
inout ret_cd TEXT);
CREATE OR REPLACE PROCEDURE month_lookup
(in mth_name TEXT,
inout mth_num TEXT,
inout ret_cd TEXT default '0^00')
  LANGUAGE plpgsql AS $$
 BEGIN
   -- SET mth_num = num WHERE name = mth_name;
  SELECT
  DISTINCT month_nm_2_num.month_nm_2_num.num
  FROM
  month_nm_2_num.month_nm_2_num
  WHERE
 month_nm_2_num.month_nm_2_num.name = mth_name
  ORDER BY
 month_nm_2_num.month_nm_2_num.name ASC
   INTO mth_num;
  RAISE NOTICE '%', mth_num;
 RETURN;
 END; $$;
/



Here is the invocation of that SP:



--/
DO $$
DECLARE
mth_name TEXT;
ret_cd TEXT;
mth_num TEXT;
BEGIN
call month_lookup ('Jan','00',null);
/* raise notice '%', mth_num;*/
END $$;
/



Here is the error I am receiving (using DB-Visualizer):

[error screenshot attachment not included in the archive]

I have tried a lot of different things. I have researched a number of
examples and I am stumped. I am fairly certain it is something simple and
related to the parameter in the SP.



Can anybody straighten me out?



Thanks in advance,



Jim Kosloskey



Re: Novice with Postgresql - trying simple Stored Procedure

2024-08-13 Thread Ron Johnson
On Tue, Aug 13, 2024 at 11:32 AM  wrote:

> I have a simple 3 column table. The Table is a Month Name to number table
> which also includes an effective date column. So 12 rows, 3 columns each.
>
>
>
> Here is the Stored Procedure I constructed to get the number if given the
> name (3 parameters for the SP):
>
>
>
> --/
> DROP PROCEDURE month_lookup
> (in mth_name TEXT,
> inout mth_num TEXT,
> inout ret_cd TEXT);
> CREATE OR REPLACE PROCEDURE month_lookup
> (in mth_name TEXT,
> inout mth_num TEXT,
> inout ret_cd TEXT default '0^00')
>   LANGUAGE plpgsql AS $$
>  BEGIN
>    -- SET mth_num = num WHERE name = mth_name;
>   SELECT
>   DISTINCT month_nm_2_num.month_nm_2_num.num
>   FROM
>   month_nm_2_num.month_nm_2_num
>   WHERE
>  month_nm_2_num.month_nm_2_num.name = mth_name
>   ORDER BY
>  month_nm_2_num.month_nm_2_num.name ASC
>INTO mth_num;
>   RAISE NOTICE '%', mth_num;
>  RETURN;
>  END; $$;
> /
>
>
>
> Here is the invocation of that SP:
>
>
>
> --/
> DO $$
> DECLARE
> mth_name TEXT;
> ret_cd TEXT;
> mth_num TEXT;
> BEGIN
> call month_lookup ('Jan','00',null);
> /* raise notice '%', mth_num; */
> END $$;
> /
>
>
>
> Here is the error I am receiving (using DB-Visualizer):
>
> [error screenshot attachment not included in the archive]
>
> I have tried a lot of different things. I have researched a number of
> examples and I am stumped. I am fairly certain it is something simple and
> related to the parameter in the SP.
>
>
PG wants to be able to write to mth_num, since it's an inout param.  But
it can't write to the constant literal '00'.

Thus, I'd try calling month_lookup with a variable that's set to '00'.
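
A minimal sketch of the corrected invocation (variable names are
illustrative; in PL/pgSQL, CALL needs variables for INOUT arguments):

DO $$
DECLARE
  mth_num TEXT := '00';
  ret_cd  TEXT;
BEGIN
  CALL month_lookup('Jan', mth_num, ret_cd);
  RAISE NOTICE 'month number: %, return code: %', mth_num, ret_cd;
END $$;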



RE: Novice with Postgresql - trying simple Stored Procedure

2024-08-13 Thread jim.kosloskey
Ron,

Thank you – that got me past that. I thought I had used a variable before
with no joy, but I probably had other issues.

Hopefully, I am on my way now.

Thanks again,

Jim

 




Re: PG Dump on 11 - Restore on 16 - is possible?

2024-08-13 Thread Tomas Vondra
On 8/13/24 11:47, Durumdara wrote:
> Hello!
> 
> We have to upgrade our server. The PG upgrade is impossible because of
> too much data.

I'm not sure I understand. What exactly is the problem with the amount
of data? How much data are you dealing with? Is the problem the time it
takes to copy the data, do you have a limited amount of disk space, or
is it something else?

I'd say pg_upgrade (in link mode) is probably orders of magnitude faster
for upgrading large clusters - probably less than a minute even for huge
clusters (multiple TB of data). And it doesn't need more disk space, since
it does not copy most of the data. Typically, you create a replica on the
new system (or clone the fs volume in some way), and then switch over
and do the pg_upgrade.
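
A hedged sketch of the pg_upgrade step itself (data directories, binary
paths, and versions are illustrative assumptions):

pg_upgrade \
  --old-datadir=/var/lib/pgsql/11/data \
  --new-datadir=/var/lib/pgsql/16/data \
  --old-bindir=/usr/pgsql-11/bin \
  --new-bindir=/usr/pgsql-16/bin \
  --link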

pg_dump is pretty expensive - it exports and imports the database, has
to rebuild indexes, ... I'd be surprised if this wasn't more laborious
and time-consuming than the pg_upgrade way.


> So we will create a new server with PG 16, and we have to migrate the
> databases one by one (from a PG 11 server).
> 
> Is it possible?
> 
> Does PG16 read PG11 dumps and restore them correctly on the PG16 server?
> 

Yes. The docs actually say this:

  Because pg_dump is used to transfer data to newer versions of
  PostgreSQL, the output of pg_dump can be expected to load into
  PostgreSQL server versions newer than pg_dump's version.

> Or is it unsupported now?
> 

It's true PG11 is EOL already, but AFAIK the dump/restore should still
work - we try to support dumping from even unsupported versions. I don't
recall if the policy is ~10 years, but the docs say:

  pg_dump can also dump from PostgreSQL servers older than its own
  version. (Currently, servers back to version 9.2 are supported.)

I'd probably use pg_dump from the target version (PG16), but it should
not matter.


regards

-- 
Tomas Vondra




Re: PG Dump on 11 - Restore on 16 - is possible?

2024-08-13 Thread Ron Johnson
On Tue, Aug 13, 2024 at 2:21 PM Tomas Vondra  wrote:

> On 8/13/24 11:47, Durumdara wrote:
> > Hello!
> >
> > We have to upgrade our server. The PG upgrade is impossible because of
> > too much data.
>
> I'm not sure I understand. What exactly is the problem with the amount
> of data?
>

Presumably the OP has a physical server, and the cabinet's disk trays are
all full (or at least there aren't enough free trays to add enough disk
space).

But, you say, 20TB disks exist!!  Yeah, and the channel speed is slow.
Besides, given that it's PG 11, the OS (RHEL?) is probably also EOL (or
soon to be).

Thus, better to buy a new / faster server, install RHEL9 and PG 16.



Re: autovacuum freeze recommendations at table level

2024-08-13 Thread Durgamahesh Manne
Hi Semab

ALTER TABLE table SET (
   autovacuum_freeze_max_age = 60000000,           -- 6 crores (60 million)
   autovacuum_multixact_freeze_max_age = 60000000, -- 6 crores (60 million)
   autovacuum_freeze_min_age = 0
);
I set this, but an autovacuum to prevent wraparound runs every 10 minutes
on the table being modified. How can I minimize the impact of this?
There is no option to set naptime at the table level.
I can see around 40 lakhs (4 million) records being modified.
TOAST handles large objects, but I use these parameters without TOAST.
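
A hedged monitoring sketch (the table name is illustrative) showing how
close a table is to the freeze limits that trigger the wraparound
autovacuum:

SELECT relname,
       age(relfrozenxid)    AS xid_age,
       mxid_age(relminmxid) AS multixact_age
FROM pg_class
WHERE relname = 'your_table';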

Regards,
Durga Mahesh Manne


On Mon, Aug 12, 2024 at 10:07 PM semab tariq  wrote:

> Hi Durga
>
> autovacuum_freeze_max_age specifies the maximum age (in transactions)
> that a table's tuples can reach before a vacuum is forced to prevent
> transaction ID wraparound. When the age of the oldest tuple in the table
> exceeds this value, an autovacuum is triggered to freeze the tuples.
> Recommendation = 2000 -> 15000
>
> autovacuum_multixact_freeze_max_age is similar to the above, but applies
> to multi-transaction IDs (used for shared row locks). When the age of the
> oldest multi-transaction ID exceeds this value, an autovacuum is triggered
> to freeze the multi-transaction IDs.
> Recommendation = 2000 -> 15000
>
> autovacuum_freeze_min_age specifies the minimum age (in transactions)
> that a tuple must reach before it is considered for freezing. Lowering this
> value can cause more frequent freezing, which can increase the overhead of
> autovacuum.
> Recommendation = 0 -> 5000
>
> Thanks, Semab
>
> On Sun, Aug 11, 2024 at 11:12 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> Hi  Respected Team,
>>
>> Could you please let me know how these freeze parameters work.
>> An update query runs on the table, through which data is modified daily.
>> Total records in the table: about 20 lakhs (2 million).
>> current setting for this table is
>> Access method: heap
>> if it reaches > 0.1 * 2000000 + 1000 = 201,000 dead tuples, as per the
>> formula, autovacuum triggers
>> Options: fillfactor=85, autovacuum_vacuum_cost_delay=0,
>> autovacuum_vacuum_cost_limit=3000, parallel_workers=6,
>> autovacuum_vacuum_scale_factor=0.1, autovacuum_vacuum_threshold=1000,
>> autovacuum_freeze_max_age=20000000 (2 crores),
>> autovacuum_multixact_freeze_max_age=20000000 (2 crores), autovacuum_freeze_min_age=0
>>
>> How do the autovacuum freeze parameters work? Give me some recommendations
>> to improve the performance beyond what it is now.
>> Ex: ALTER TABLE table SET (
>>    autovacuum_freeze_max_age = 20000000,           -- 2 crores (20 million)
>>    autovacuum_multixact_freeze_max_age = 20000000, -- 2 crores (20 million)
>>    autovacuum_freeze_min_age = 0
>> );
>> Regards,
>> Durga Mahesh
>>
>


Re: Column type modification in big tables

2024-08-13 Thread Lok P
On Tue, Aug 13, 2024 at 7:39 PM Greg Sabino Mullane 
wrote:

> On Sat, Aug 10, 2024 at 5:06 PM Lok P  wrote:
>
>
>> Can someone throw some light: in case we get 5-6 hrs of downtime for this
>> change, then what method should we choose for this ALTER operation?
>>
>
> We can't really answer that. Only you know what resources you have, what
> risk/reward you are willing to handle, and how long things may take. For
> that latter item, your best bet is to try this out on the same/similar
> hardware and see how long it takes. Do a smaller table and extrapolate if
> you need to. Or promote one of your replicas offline and modify that. I've
> given you a low-risk / medium-reward option with check constraints, but for
> the ALTER TABLE options you really need to try it and see (on non-prod).
>
>
*"Do a smaller table and extrapolate if you need to. Or promote one of your
replicas offline and modify that. I've given you a low-risk / medium-reward
option with check constraints, but for the ALTER TABLE options you really
need to try it and see (on non-prod)."*

Is there any possible method (maybe by looking into the data dictionary
tables/views, etc.) to see the progress of the ALTER statement, by which we
can estimate its expected completion time? I understand pg_stat_activity
doesn't show any completion percentage for a statement, but I am wondering
whether there is any other way to estimate how long the ALTER command will
take to complete in prod.


Re: Column type modification in big tables

2024-08-13 Thread veem v
On Tue, 13 Aug 2024 at 19:39, Greg Sabino Mullane 
wrote:

> On Sat, Aug 10, 2024 at 5:06 PM Lok P  wrote:
>
>
>> Can someone throw some light: in case we get 5-6 hrs of downtime for this
>> change, then what method should we choose for this ALTER operation?
>>
>
> We can't really answer that. Only you know what resources you have, what
> risk/reward you are willing to handle, and how long things may take. For
> that latter item, your best bet is to try this out on the same/similar
> hardware and see how long it takes. Do a smaller table and extrapolate if
> you need to. Or promote one of your replicas offline and modify that. I've
> given you a low-risk / medium-reward option with check constraints, but for
> the ALTER TABLE options you really need to try it and see (on non-prod).
>
>>
>>
What if the OP opts for a strategy something like the below (see the
sketch after the list):
1) Detach the partitions
2) Alter the individual partitions to the required column type and length
3) Alter the parent table
4) Attach the partitions back to the main table

This should be faster, and it handles each partition individually in a
controlled fashion.
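
A hedged sketch of that sequence (table, partition, and column names and
the partition bounds are illustrative assumptions):

ALTER TABLE orders DETACH PARTITION orders_2024_07;
ALTER TABLE orders_2024_07 ALTER COLUMN amount TYPE numeric(15,2);
-- ...repeat detach/alter for each partition...
ALTER TABLE orders ALTER COLUMN amount TYPE numeric(15,2);
ALTER TABLE orders ATTACH PARTITION orders_2024_07
  FOR VALUES FROM ('2024-07-01') TO ('2024-08-01');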


Re: Column type modification in big tables

2024-08-13 Thread Peter J. Holzer
On 2024-08-14 01:26:36 +0530, Lok P wrote:
> Is there any possible method(maybe by looking into the data dictionary tables/
> views etc) to see the progress of the Alter statement by which we can estimate
> the expected completion time of the "Alter" command? I understand
> pg_stat_activity doesn't show any completion percentage of a statement, but
> wondering if by any other possible way we can estimate the amount of time it
> will take in prod for the completion of the ALTER command.

You could look at the data files. Tables in PostgreSQL are stored as a
series of 1 GB files, so watching them being created and/or read gives
you a pretty good idea of the progress.
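
To find which files belong to the table in question, a query like this
should work (the table name is illustrative):

SELECT pg_relation_filepath('tab1');
-- returns something like base/16384/266648, relative to the data directory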

For example, here is an alter table (changing one column from int to
bigint) on a 1.8 GB table on my laptop:

The original table: Two data files with 1 and 0.8 GB respectively:

22:26:51 1073741824 Aug 13 22:24 266648
22:26:51  853794816 Aug 13 22:26 266648.1

The operation begins: A data file for the new table appears:

22:26:55 1073741824 Aug 13 22:26 266648
22:26:55  853794816 Aug 13 22:26 266648.1
22:26:55   79298560 Aug 13 22:26 266659

... and grows:

22:26:57 1073741824 Aug 13 22:26 266648
22:26:57  853794816 Aug 13 22:26 266648.1
22:26:57  208977920 Aug 13 22:26 266659

... and grows:

22:26:59 1073741824 Aug 13 22:26 266648
22:26:59  853794816 Aug 13 22:26 266648.1
22:26:59  284024832 Aug 13 22:26 266659

and now the table has exceeded 1 GB, so there's a second file:

22:27:17 1073741824 Aug 13 22:26 266648
22:27:17 1073741824 Aug 13 22:27 266659
22:27:17  853794816 Aug 13 22:27 266648.1
22:27:17    3022848 Aug 13 22:27 266659.1

... and a third:

22:27:44 1073741824 Aug 13 22:26 266648
22:27:44 1073741824 Aug 13 22:27 266659
22:27:44 1073741824 Aug 13 22:27 266659.1
22:27:44  853794816 Aug 13 22:27 266648.1
22:27:44   36798464 Aug 13 22:27 266659.2

almost finished:

22:28:08 1073741824 Aug 13 22:26 266648
22:28:08 1073741824 Aug 13 22:27 266659
22:28:08 1073741824 Aug 13 22:27 266659.1
22:28:08  853794816 Aug 13 22:27 266648.1
22:28:08   36798464 Aug 13 22:28 266659.2

Done: The old table has been reduced to an empty file (not sure why
PostgreSQL keeps that around):

22:28:10 1073741824 Aug 13 22:27 266659
22:28:10 1073741824 Aug 13 22:27 266659.1
22:28:10   36798464 Aug 13 22:28 266659.2
22:28:10  0 Aug 13 22:28 266648

Of course you need to be postgres or root to do this. Be careful!

Watching the access times may be useful, too, but on Linux by default
the access time is only updated under some special circumstances, so
this may be misleading.
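
A hedged sketch of the watching itself (the data directory path and
relfilenode are illustrative):

# as root or the postgres OS user:
watch -n 2 'ls -l /var/lib/pgsql/16/data/base/16384/266659*'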

hp


-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"

