searching for libpq5-14.1-42PGDG.rhel8.x86_64

2024-08-09 Thread 王瞿
Searching for the above.
The 
Does anyone know where I can find it?
Needed for a project to work with PostgreSQL 14.


Thanks.








wangq...@126.com

Trouble understanding how to avoid/manage ERROR: multixact "members" limit exceeded

2024-08-09 Thread Jim Vanns
Hi pggen community!

I am struggling with this error almost daily now and despite various
efforts, not succeeding in avoiding or dealing with it;

ERROR:  multixact "members" limit exceeded
DETAIL:  This command would create a multixact with 2 members, but the
remaining space is only enough for 0 members.
HINT:  Execute a database-wide VACUUM in database with OID 16467 with
reduced vacuum_multixact_freeze_min_age and
vacuum_multixact_freeze_table_age settings.

Runtime details follow below, but before that; I am now seeing the above
error almost daily after approximately 12 hours of normal or expected
behaviour and throughput. Then it hits and all writes are blocked etc. and
the service is largely unusable/unable to recover. Restarting PG does allow
autovacuum processes to kick in with aggressive vacuuming to handle the
multixact freezing, but that isn't a suitable solution! Although I have now
read sources that explain why multixact XIDs exist and when they're used, I
am not able to properly figure out how to plan for them or configure
PostgreSQL appropriately to handle them given our workload.

My questions are;

1) How should I be managing this? Although not new to PG, I am new to this
particular problem.
2) How can I confirm what processes/tables are contributing to this
multixact "members" limit?
3) What are the units for
vacuum_multixact_freeze_min_age/vacuum_multixact_freeze_table_age and how
should I be
setting them appropriately for my rates etc.? I can't really find
anything that explains this clearly.
4) How can I check that autovacuum workers are specifically able to freeze
multixact XIDs and thus avoid this?
5) Can I check if autovacuum is actually succeeding in its work?

Schema (brief):
10 tables
1 table (the 'hottest') partitioned by 2h ranges; so an extra 12x
partitions/relations a day, though only the most recent one might be
considered 'active'

System (brief):
PG: 15.5 w/ TimescaleDB 2.14 extension
Write heavy workload;
Mean Txn/s (low):   8k
Mean Txn/s (high): 10k
Mean rows/s: 100k
Concurrency: 32 threads (local socket sessions) for 'upserts' via primary
service plus auxiliary processes (background workers/autovacuum workers
etc.)

Pattern (brief):
COPY (binary) to session-local temporary tables (ON COMMIT DELETE)
INSERT FROM tt TO main_table(s)
UPDATE FROM tt TO main_table(s)
VACUUM tt (every 100k txns)
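
For anyone less familiar with this shape of workload, a minimal sketch of what
one such batch might look like (main_table, tt, id and val are hypothetical
placeholders here, not the actual schema):

-- Hypothetical sketch of one batch in the pattern described above
CREATE TEMPORARY TABLE tt (LIKE main_table INCLUDING DEFAULTS)
    ON COMMIT DELETE ROWS;

BEGIN;
COPY tt FROM STDIN (FORMAT binary);   -- bulk load from the client
INSERT INTO main_table (id, val)
    SELECT id, val FROM tt
    ON CONFLICT (id) DO NOTHING;      -- bring in new rows
UPDATE main_table m
    SET val = t.val
    FROM tt t
    WHERE m.id = t.id;                -- refresh existing rows
COMMIT;

VACUUM tt;                            -- roughly every 100k transactions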

Config (excerpt):
# - Transactions - (based on a period mean of ~8k txn/s)
# See/ref;
# www.postgresql.org/docs/15/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
#
www.percona.com/blog/tuning-autovacuum-in-postgresql-and-autovacuum-internals
# blog.sentry.io/transaction-id-wraparound-in-postgres
#
https://blog.danslimmon.com/2023/12/11/concurrent-locks-and-multixacts-in-postgres
#
https://pganalyze.com/blog/5mins-postgres-multiXact-ids-foreign-keys-performance
# What unit is 'age' here? No. of transactions?
vacuum_freeze_min_age = 2880 # 1h @ 8k/s
vacuum_multixact_freeze_min_age = 2880 # 1h @ 8k/s
autovacuum_freeze_max_age = 15760 # 2h @ 8k/s
autovacuum_multixact_freeze_max_age = 5760 # 2h @ 8k/s
vacuum_multixact_freeze_table_age = 11520 # 4h @ 8k/s

# We don't UPDATE or DELETE often; we never DELETE and only perform
# UPDATE operations every 6h due to internal cache(s). So we set this to
# zero so the formula doesn't consider it a major factor
autovacuum_vacuum_threshold = 0 # In tuples/rows UPDATEd or DELETEd
autovacuum_vacuum_insert_threshold = 5000 # In tuples/rows INSERTed

autovacuum_vacuum_scale_factor = 0.1 # 10%
autovacuum_analyze_scale_factor = 0.1 # 10%
autovacuum_vacuum_insert_scale_factor = 0.1 # 10%

autovacuum_naptime = 60
autovacuum_max_workers = 8

# Give autovacuum more credits to ensure a better chance at scanning
autovacuum_vacuum_cost_limit = 2000 # 10x the default
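
As a point of reference for question 3, both the plain and multixact 'age'
settings count transactions/multixacts consumed rather than wall-clock time.
One way to watch how quickly each database advances against those thresholds
is a monitoring sketch like the following (not part of the workload itself):

-- Database-wide XID and multixact age, to compare against
-- autovacuum_freeze_max_age / autovacuum_multixact_freeze_max_age
SELECT datname,
       age(datfrozenxid)    AS xid_age,
       mxid_age(datminmxid) AS mxid_age
FROM pg_database
ORDER BY mxid_age DESC;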

Thanks for your help and any guidance/knowledge you can share!

Jim

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: Column type modification in big tables

2024-08-09 Thread Lok P
On Fri, Aug 9, 2024 at 2:06 AM Greg Sabino Mullane 
wrote:

> On Thu, Aug 8, 2024 at 2:39 PM Lok P  wrote:
>
>> Can anybody suggest any other possible way here.
>>
>
> Sure - how about not changing the column type at all?
>
> > one of the columns from varchar(20) to varchar(2)
>
> ALTER TABLE foobar ADD CONSTRAINT twocharplease CHECK (length(mycol) <= 2)
> NOT VALID;
>
> > one of the columns from Number(10,2) to Numeric(8,2)
>
> ALTER TABLE foobar ADD CONSTRAINT eightprecision CHECK (mycol <= 10^8) NOT
> VALID;
>
> > two of the columns from varchar(20) to numeric(3)
>
> This one is trickier, as we don't know the contents, nor why it is going
> to numeric(3) - not a terribly useful data type, but let's roll with it and
> assume the stuff in the varchar is a number of some sort, and that we don't
> allow nulls:
>
> ALTER TABLE foobar ADD CONSTRAINT onekorless CHECK (mycol::numeric(3) is
> not null) NOT VALID;
>
> You probably want to check on the validity of the existing rows: see the
> docs on VALIDATE CONSTRAINT here:
>
> https://www.postgresql.org/docs/current/sql-altertable.html
>
>
>
Thank you so much. Will definitely try to evaluate this approach. The only
concern I have is that, as this data is moving downstream with exactly the
same data type and length, will it cause the downstream code to break while
using this column in join or filter criteria? Also, I believe the optimizer
won't be able to utilize this information while preparing the execution plan.

Another thing, correct me if I'm wrong: my understanding is that if we want to
run the "validate constraint" command after running this "check constraint
with not valid" command, it will do a full table scan across all the
partitions, but it's still beneficial compared to updating the column values
for each row.


Re: Getting specific partition from the partition name

2024-08-09 Thread veem v
This helps. Thank you very much.

On Fri, 9 Aug 2024 at 02:15, Greg Sabino Mullane  wrote:

> YYYY_MM_DD is already set up for sorting, so just do:
>
> SELECT table_name FROM information_schema.tables WHERE table_name ~
> 'table_part_p' ORDER BY 1 DESC;
>
> If you need to grab the numbers:
>
> SELECT substring('table_part_p2022_03_04' from '([\d_]+)$');
>
> Cheers,
> Greg
>
>


Re: Building v17 Beta2 on Windows

2024-08-09 Thread Dominique Devienne
On Tue, Aug 6, 2024 at 10:53 AM Dominique Devienne  wrote:
> On Mon, Aug 5, 2024 at 2:26 PM David Rowley  wrote:
> > Meson is now the only available method for Visual Studio builds."
>
> Thanks David. My colleague figured it out, thanks to your pointers. Cheers, 
> --DD

FYI, we ran into a surprising change, going from using Makefiles in beta2,
to using Meson in beta3, on Linux, for consistency with the Windows build.
The libraries are now in lib64 instead of lib. Yet another change to adapt to.
No big deal, just not 100% backward compatible, that's all. --DD




Re: searching for libpq5-14.1-42PGDG.rhel8.x86_64

2024-08-09 Thread Laurenz Albe
On Fri, 2024-08-09 at 17:18 +0800, 王瞿 wrote:
> Searching for the above.
> The 
> Does anyone know where I can find it?
> Needed for a project to work with PostgreSQL 14.

The client libraries for this version are in
https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-8-x86_64/postgresql14-14.1-1PGDG.rhel8.x86_64.rpm

But you should use 14.13.  There is no compatibility problem with
using the latest minor version; you just get fewer bugs.

Yours,
Laurenz Albe




Re: libpq version macro to use or not PQsocketPoll

2024-08-09 Thread Dominique Devienne
On Tue, Aug 6, 2024 at 4:48 PM Dominique Devienne  wrote:
> On Tue, Aug 6, 2024 at 4:31 PM Tom Lane  wrote:
> > Dominique Devienne  writes:
> > Indeed, that's an oversight, and there's a number of other things
> > we added to libpq-fe.h between 16 and 17 that probably deserve
> > their own LIBPQ_HAS symbols.

Hi. Doesn't look like beta3 _has_ them (pun indented :)).
Was probably too late for the beta3 release train.

> We can have both. Also, version macros don't
> multiply the way _HAS_ macros do, over time.

I stumbled on such version macros today by chance.
And pg_config.h seems available on both Windows and Linux.
So I guess that's good enough for conditional compilation.

From the old post I found, and your recent reply Tom,
I got the impression these macros didn't exist. FWIW. --DD

[ddevienne@acme include]$ sift VERSION pg*.h
pg_config.h:#define PACKAGE_VERSION "16.1"
pg_config.h:#define PG_MAJORVERSION "16"
pg_config.h:#define PG_MAJORVERSION_NUM 16
pg_config.h:#define PG_MINORVERSION_NUM 1
pg_config.h:#define PG_VERSION "16.1"
pg_config.h:#define PG_VERSION_NUM 160001
pg_config.h:#define PG_VERSION_STR "PostgreSQL 16.1 on
x86_64-pc-linux-gnu, compiled by gcc (GCC) 12.2.1 20221121 (Red Hat
12.2.1-7), 64-bit"




Re: Trouble understanding how to avoid/manage ERROR: multixact "members" limit exceeded

2024-08-09 Thread Jim Vanns
To clear up any confusion, I tried the reductions as hinted in the
error message and after a few iterations still faced the same error.
Also, re. the sample configuration I provided - this is the result of
said iterations after initially using the stock values from the PGDG
RPM.

Jim

On Fri, 9 Aug 2024 at 11:26, Jim Vanns  wrote:
>
> Hi pggen community!
>
> I am struggling with this error almost daily now and despite various efforts,
> not succeeding in avoiding or dealing with it;
>
> ERROR:  multixact "members" limit exceeded
> DETAIL:  This command would create a multixact with 2 members, but the
> remaining space is only enough for 0 members.
> HINT:  Execute a database-wide VACUUM in database with OID 16467 with reduced
> vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age
> settings.



-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London




Re: Getting specific partition from the partition name

2024-08-09 Thread GF
On Fri, 9 Aug 2024 at 06:20, Ron Johnson  wrote:

>
> What if the partitions aren't all rationally named?  There *must* be a
> pg_* table out there which contains the partition boundaries...
>
>
The pg_class column relpartbound contains an internal representation of the
partition boundary, when applicable.
You can decompile it into the canonical text format with pg_get_expr( expr
pg_node_tree, relation oid [, pretty boolean ] ) → text.
So:
create table t(x int primary key) partition by list(x);
create table u partition of t for values in (0,1);
create table v partition of t for values in (2,3,4,5,6,7,8,9);
select oid::regclass,pg_get_expr(relpartbound,oid) from pg_class where
relkind='r' and relispartition;
 oid |              pg_get_expr
-----+-----------------------------------------
 u   | FOR VALUES IN (0, 1)
 v   | FOR VALUES IN (2, 3, 4, 5, 6, 7, 8, 9)
(2 rows)
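
If you only want the partitions of one particular parent, a possible
refinement of the above (an untested sketch joining through pg_inherits) is:

select c.oid::regclass as partition,
       pg_get_expr(c.relpartbound, c.oid) as bound
from pg_inherits i
join pg_class c on c.oid = i.inhrelid
where i.inhparent = 't'::regclass
order by 1;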

Best,
Giovanni


Re: Trouble understanding how to avoid/manage ERROR: multixact "members" limit exceeded

2024-08-09 Thread Adrian Klaver

On 8/9/24 03:26, Jim Vanns wrote:

Hi pggen community!

I am struggling with this error almost daily now and despite various 
efforts, not succeeding in avoiding or dealing with it;



My questions are;

1) How should I be managing this? Although not new to PG, I am new to 
this particular problem.
2) How can I confirm what processes/tables are contributing to this 
multixact "members" limit?
3) What are the units for 
vacuum_multixact_freeze_min_age/vacuum_multixact_freeze_table_age and 
how should I be
     setting them appropriately for my rates etc.? I can't really find 
anything that explains this clearly.
4) How can I check that autovacuum workers are specifically able to 
freeze multixact XIDs and thus avoid this?


pg_stat_activity

https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW

wait_event_type

wait_event

Table 28.12. Wait Events of Type LWLock

This might tell you if something is hanging up on a table or tables.
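
As a concrete starting point, a sketch of the kind of query being suggested
here, which groups currently-waiting sessions by wait event (multixact-related
contention shows up with 'MultiXact' in the wait_event name):

-- Sessions currently waiting, grouped to spot pile-ups
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;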


5) Can I check if autovacuum is actually succeeding in its work



pg_stat_all_tables

https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW

Will show you the vacuum/analyze that has been done on a table.
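
For example, a sketch that combines those per-table autovacuum counters with
each table's remaining multixact age (the ordering and LIMIT are arbitrary):

-- Which tables autovacuum has processed, and how much multixact
-- freezing is still pending (larger mxid_age = older relminmxid)
SELECT s.schemaname,
       s.relname,
       s.last_autovacuum,
       s.autovacuum_count,
       mxid_age(c.relminmxid) AS relminmxid_age
FROM pg_stat_all_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY mxid_age(c.relminmxid) DESC
LIMIT 20;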



Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Column type modification in big tables

2024-08-09 Thread Greg Sabino Mullane
On Fri, Aug 9, 2024 at 6:39 AM Lok P  wrote:

> Thank you so much. Will definitely try to evaluate this approach. The Only
> concern I have is , as this data is moving downstream with exactly the same
> data type and length , so will it cause the downstream code to break while
> using this column in the join or filter criteria. Also I believe the
> optimizer won't be able to utilize this information while preparing the
> execution plan.
>

Yes, this is not as ideal as rewriting the table, but you asked for
other approaches :) As to the impact on your downstream stuff, I think
you have to try and see. It's not clear what you mean by the optimizer; it's
not going to really care about numeric(10) versus numeric(8) or varchar(20)
vs varchar(2). It's possible the varchar -> numeric could cause issues, but
without real-world queries and data we cannot say.


>  Another thing , correct me if wrong, My understanding is  , if we want to
> run the "validate constraint" command after running this "check constraint
> with not valid" command, this will do a full table scan across all the
> partitions , but it's still beneficial as compared to updating the columns
> values for each rows. Correct me if I'm wrong.
>

Yes, it needs to scan the entire table, but it's a lightweight lock, won't
block concurrent access, will not need to detoast, and makes no table or
index updates. Versus an entire table rewrite which will do heavy locking,
take up tons of I/O, update all the indexes, and generate quite a lot of
WAL.
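
A minimal sketch of the two-step flow being discussed, reusing the example
constraint from earlier in the thread (foobar/mycol are placeholders):

-- Step 1: quick, takes only a brief lock, existing rows are not checked yet
ALTER TABLE foobar ADD CONSTRAINT twocharplease
    CHECK (length(mycol) <= 2) NOT VALID;

-- Step 2: scans the table (and its partitions) under a weaker lock,
-- with no heap or index rewrite
ALTER TABLE foobar VALIDATE CONSTRAINT twocharplease;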

Cheers,
Greg


Re: Vacuum full connection exhaustion

2024-08-09 Thread Costa Alexoglou
On Fri, Aug 9, 2024 at 1:02 AM David Rowley  wrote:

> On Fri, 9 Aug 2024 at 02:12, Christophe Pettus  wrote:
> > VACUUM FULL takes an exclusive lock on the table that it is operating
> on.  It's possible that a connection becomes blocked on that exclusive lock
> waiting for the VACUUM FULL to finish, the application sees the connection
> stopped and fires up another one (this is common in container-based
> applications), that one blocks... until all of the connections are full of
> queries waiting on that VACUUM FULL.
>
> I also imagine this is the cause. One way to test would be to do:
> BEGIN; LOCK TABLE ; and see if the connections pile up
> in a similar way to when the VACUUM FULL command is used.
>
> David
>

Thanks folks. David, that's a really straightforward way to test. I validated
this: when I lock the two tables involved in the benchmark, the connections
keep growing until they reach `max_connections`.
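
For anyone who wants to watch that pile-up as it happens, a small monitoring
sketch using pg_blocking_pids might look like:

-- Sessions currently blocked behind a lock held by another backend
SELECT count(*) AS blocked_sessions
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;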


Re: Getting specific partition from the partition name

2024-08-09 Thread Torsten Förtsch
If you want to convert your table name into a timestamp, you don't need
substring or similar. This also works:

=# select to_date('table_part_p2024_08_08', '"table_part_p"YYYY"_"MM"_"DD');
  to_date
------------
 2024-08-08
(1 row)

But as Greg said, your strings are perfectly sortable.


On Thu, Aug 8, 2024 at 9:52 PM veem v  wrote:

> Hi ,
> We are using postgres version 15.4. We have a range partition table and
> the partition naming convention is generated by pg_partman and is something
> like "<table_name>_pYYYY_MM_DD".
>
> We have a requirement of extracting specific partitions ordered by the
> date criteria and also do some operations on that specific date. But I am
> struggling and it's not working as expected.I tried something as below but
> it's not working.Can somebody guide me here please.
>
>  to_date( substring('table_part_p2024_08_08' from
> '_p(\d{4})_(\d{2})_(\d{2})'),  'YYYY_MM_DD'
> ) < current_date
>
> or is there any ready-made data dictionary which will give us the order of
> the partitions by the date and we can get hold of the specific nth
> partition in that table?
>
> Regards
> Veem
>


Re: Column type modification in big tables

2024-08-09 Thread Lok P
On Fri, Aug 9, 2024 at 9:19 PM Greg Sabino Mullane 
wrote:

> On Fri, Aug 9, 2024 at 6:39 AM Lok P  wrote:
>
>> Thank you so much. Will definitely try to evaluate this approach. The
>> Only concern I have is , as this data is moving downstream with exactly the
>> same data type and length , so will it cause the downstream code to break
>> while using this column in the join or filter criteria. Also I believe the
>> optimizer won't be able to utilize this information while preparing the
>> execution plan.
>>
>
> Yes, this is not as ideal as rewriting the table, but you asked for
> another approaches :) As to the impact of your downstream stuff, I think
> you have to try and see. Not clear what you mean by the optimizer, it's not
> going to really care about numeric(10) versus numeric(8) or varchar(20) vs
> varchar(2). It's possible the varchar -> numeric could cause issues, but
> without real-world queries and data we cannot say.
>
>
>>  Another thing , correct me if wrong, My understanding is  , if we want
>> to run the "validate constraint" command after running this "check
>> constraint with not valid" command, this will do a full table scan across
>> all the partitions , but it's still beneficial as compared to updating the
>> columns values for each rows. Correct me if I'm wrong.
>>
>
> Yes, it needs to scan the entire table, but it's a lightweight lock, won't
> block concurrent access, will not need to detoast, and makes no table or
> index updates. Versus an entire table rewrite which will do heavy locking,
> take up tons of I/O, update all the indexes, and generate quite a lot of
> WAL.
>
>
Thank you so much Greg.

Considering the option where we are able to get a large downtime window to
get this activity done:

Some teammates suggested altering the column with a "USING" clause. I am not
really able to understand the difference; also, when I tested on a simple
table, the "USING" clause seems to take more time than a normal ALTER. But
again, I don't see any way to see the progress and estimated completion time.
Can you share your thoughts on this?

ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) USING
mycol::NUMERIC(3);
VS
ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) ;

Another thing that also comes to mind is whether we should just create a new
partition table (say new_part_table) from scratch from the DDL of the
existing table (say old_part_table) and then load the data into it using a
command like (insert into new_part_table.. select..from old_part_table), then
create indexes and constraints etc., something as below.

Will this approach be faster/better as compared to the simple "alter table
alter column approach" as above, considering we will have 4-6 hours of
downtime for altering three different columns on this ~5TB table?


-- Steps
Create table exactly same as existing partition table but with the modified
column types/lengths.

drop indexes ;  (Except PK and FK indexes may be..)
drop constraints;

insert into new_part_table (...) select (...) from old_part_table;

create indexes concurrently ;
create constraints; (But this table is also a child table to another
partition table, so creating the foreign key may be resource consuming here
too).

drop the old_part_table;
rename the new_part_table to old_part_table;
rename all the partitions;

VACUUM  old_part_table  ;
ANALYZE  old_part_table  ;
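
If you do go this route, the final swap itself can be kept very short by
wrapping the renames in a single transaction. A sketch, assuming
old_part_table is a standalone partitioned table (not itself attached as a
partition elsewhere) and using a hypothetical _bak suffix:

BEGIN;
ALTER TABLE old_part_table RENAME TO old_part_table_bak;
ALTER TABLE new_part_table RENAME TO old_part_table;
COMMIT;

-- after verification:
DROP TABLE old_part_table_bak;
ANALYZE old_part_table;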


Insert works but fails for merge

2024-08-09 Thread yudhi s
Hello,
It's PostgreSQL version 15.4. We have an insert that works fine, but a
similar insert with the same 'timestamp' value, when executed through a
merge, fails stating "You will need to rewrite or cast the expression."
Why so?

Example:
https://dbfiddle.uk/j5S7br-q

 CREATE TABLE tab1 (
id varchar(100) ,
mid INT,
txn_timestamp TIMESTAMPTZ NOT NULL,
cre_ts TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (txn_timestamp);

CREATE TABLE tab1_2024_08_09 PARTITION OF tab1
FOR VALUES FROM ('2024-08-09 00:00:00') TO ('2024-08-10 00:00:00');

-- Below insert works fine
INSERT INTO tab1
(id, mid, txn_timestamp, cre_ts)
VALUES
('5efd4c91-ef93-4477-840c-a723ae212d84', 123,
'2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z');

-- Below merge , which trying to insert similar row but failing

WITH source_data (id, mid, txn_timestamp, cre_ts) AS (
VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
'2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z')
)
INSERT INTO tab1 (id, mid, txn_timestamp, cre_ts)
SELECT id, mid, txn_timestamp, cre_ts
FROM source_data
ON CONFLICT (id) DO UPDATE
SET mid = EXCLUDED.mid,
txn_timestamp = EXCLUDED.txn_timestamp,
cre_ts = EXCLUDED.cre_ts;

ERROR: column "txn_timestamp" is of type timestamp with time zone but
expression is of type text
LINE 24: SELECT id, mid, txn_timestamp, cre_ts
 ^
HINT: You will need to rewrite or cast the expression.


Re: Insert works but fails for merge

2024-08-09 Thread David G. Johnston
On Fri, Aug 9, 2024 at 2:14 PM yudhi s  wrote:

>
> Why so?
>

Because you stuck a CTE in between the column list of the insert - where
types are known - and the values command - where types are unknown since
you didn't specify them.  As the row comes out of the CTE every column must
have a known type, and so in the absence of context they get typed as text.
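
To make that concrete, one option consistent with this explanation is to cast
the CTE's text columns back to the target types in the SELECT. A sketch of
just the type-resolution point (the ON CONFLICT target is a separate question
and is left out here):

WITH source_data (id, mid, txn_timestamp, cre_ts) AS (
VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123,
'2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z')
)
INSERT INTO tab1 (id, mid, txn_timestamp, cre_ts)
SELECT id, mid, txn_timestamp::timestamptz, cre_ts::timestamptz
FROM source_data;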

David J.


Re: Insert works but fails for merge

2024-08-09 Thread Adrian Klaver

On 8/9/24 14:13, yudhi s wrote:

Hello,
It's version 15.4 postgres. Where we have an insert working fine, but 
then a similar insert with the same 'timestamp' value, when trying to be 
executed through merge , it fails stating "You will need to rewrite or 
cast the expression.". Why so?


Example:
https://dbfiddle.uk/j5S7br-q

CREATE TABLE tab1 (
     id varchar(100) ,
     mid INT,
     txn_timestamp TIMESTAMPTZ NOT NULL,
     cre_ts TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (txn_timestamp);

CREATE TABLE tab1_2024_08_09 PARTITION OF tab1
     FOR VALUES FROM ('2024-08-09 00:00:00') TO ('2024-08-10 00:00:00');

-- Below insert works fine
INSERT INTO tab1
     (id, mid, txn_timestamp, cre_ts)
VALUES
     ('5efd4c91-ef93-4477-840c-a723ae212d84', 123, 
'2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z');


-- Below merge , which trying to insert similar row but failing

WITH source_data (id, mid, txn_timestamp, cre_ts) AS (
     VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123, 
'2024-08-09T11:33:49.402585600Z', '2024-08-09T11:33:49.402585600Z')

)
INSERT INTO tab1 (id, mid, txn_timestamp, cre_ts)
SELECT id, mid, txn_timestamp, cre_ts
FROM source_data
ON CONFLICT (id) DO UPDATE
SET    mid = EXCLUDED.mid,
     txn_timestamp = EXCLUDED.txn_timestamp,
     cre_ts = EXCLUDED.cre_ts;

ERROR: column "txn_timestamp" is of type timestamp with time zone but 
expression is of type text LINE 24: SELECT id, mid, txn_timestamp, 
cre_ts ^ HINT: You will need to rewrite or cast the expression.



VALUES:

https://www.postgresql.org/docs/current/sql-values.html

"When VALUES is used in INSERT, the values are all automatically coerced 
to the data type of the corresponding destination column. When it's used 
in other contexts, it might be necessary to specify the correct data 
type. If the entries are all quoted literal constants, coercing the 
first is sufficient to determine the assumed type for all:


SELECT * FROM machines
WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'), 
('192.168.1.43'));

"

The VALUES is not directly attached to the INSERT, you will need to do 
explicit casts:


VALUES ('5efd4c91-ef93-4477-840c-a723ae212d66', 123, 
'2024-08-09T11:33:49.402585600Z'::timestamptz, 
'2024-08-09T11:33:49.402585600Z'::timestamptz)


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Getting specific partition from the partition name

2024-08-09 Thread Thiemo Kellner
Thanks. Nice one. Would not have thought to try.