Re: pgBackRest for a 50 TB database

2023-10-05 Thread Abhishek Bhola
Hi Stephen

Here is the update with compress-type=zst in the config file
Process-max is still 30. *But it took longer than before: around 27 hours
50 mins.*

full backup: 20231004-130621F
    timestamp start/stop: 2023-10-04 13:06:21+09 / 2023-10-05 15:56:03+09
    wal start/stop: 00010001AC0E0054 / 00010001AC0E0054
    database size: 38249.0GB, database backup size: 38249.0GB
    repo1: backup size: 5799.8GB

Do you think I could be missing something?
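
For reference, the pgbackrest.conf being discussed would now look roughly
like this (the stanza name and paths below are placeholder assumptions, not
taken from the thread):

[global]
repo1-path=/backup/pgbackrest
repo1-block=y
repo1-bundle=y
compress-type=zst
process-max=30
start-fast=y

[mydb]
pg1-path=/var/lib/pgsql/data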

@Krishane

Let me try to answer the questions as best I can:
1. The Connectivity protocol for DB is FC.
I cannot pinpoint the exact reason why it takes 26 hours. If I knew
exactly, I would have improved it myself.
I don't think 10 hours is even realistic, although if you can improve this
number, please let us know.

2. Yes, it is a dedicated DB server.

3. You're right, it is NAS

Thanks

On Wed, Oct 4, 2023 at 2:37 PM KK CHN  wrote:

> Greetings,
> Happy to hear you successfully performed pgBackRest for a 50TB DB. Out of
> curiosity I would like to know your infrastructure settings.
>
> 1. The connectivity protocol and bandwidth you used for your backend
> storage? Is it iSCSI, FC, FCoE or GbE? What is the exact reason for the
> 26 hours it took in the best case? What factors might reduce those 26
> hours to much less, say 10 hours or so, for a 50 TB DB backup? What
> should be fine-tuned or deployed for better performance?
>
> 2. It has been said that you are running the DB on a 2-socket, 18-core
> processor = 36 physical cores. Is it dedicated server hardware, entirely
> dedicated to the 50 TB database alone?
> I ask because nowadays DB servers mostly run on VMs in virtualized
> environments. So I would like to know: are all 36 physical cores and the
> associated RAM utilized by your 50 TB database server, or are there
> vacant CPU cores / free RAM on those machines?
>
> 3. What kind of connectivity/bandwidth did you establish between the DB
> server and the storage backend? (I want to know the server NIC details,
> the connectivity channel protocol/bandwidth, and the spec of the switch
> from the DB server to the storage backend (NAS in this case, right?))
>
> Could you share your recommendations/details, because I also need to
> perform such a pgBackRest trial from a production DB to a suitable
> storage device (most likely DELL Unity unified storage).
>
> Any inputs are most welcome.
>
> Thanks,
> Krishane
>
> On Tue, Oct 3, 2023 at 12:14 PM Abhishek Bhola <
> abhishek.bh...@japannext.co.jp> wrote:
>
>> Hello,
>>
>> As said above, I tested pgBackRest on my bigger DB and here are the
>> results.
>> Server on which this is running has the following config:
>> Architecture:          x86_64
>> CPU op-mode(s):        32-bit, 64-bit
>> Byte Order:            Little Endian
>> CPU(s):                36
>> On-line CPU(s) list:   0-35
>> Thread(s) per core:    1
>> Core(s) per socket:    18
>> Socket(s):             2
>> NUMA node(s):          2
>>
>> Data folder size: 52 TB (has some duplicate files since it is restored
>> from tapes)
>> Backup is being written on to DELL Storage, mounted on the server.
>>
>> pgbackrest.conf with following options enabled
>> repo1-block=y
>> repo1-bundle=y
>> start-fast=y
>>
>>
>> 1. *Using process-max: 30, Time taken: ~26 hours*
>> full backup: 20230926-092555F
>> timestamp start/stop: 2023-09-26 09:25:55+09 / 2023-09-27 11:07:18+09
>> wal start/stop: 00010001AC0E0044 / 00010001AC0E0044
>> database size: 38248.9GB, database backup size: 38248.9GB
>> repo1: backup size: 6222.0GB
>>
>> 2. *Using process-max: 10, Time taken: ~37 hours*
>> full backup: 20230930-190002F
>> timestamp start/stop: 2023-09-30 19:00:02+09 / 2023-10-02 08:01:20+09
>> wal start/stop: 00010001AC0E004E / 00010001AC0E004E
>> database size: 38248.9GB, database backup size: 38248.9GB
>> repo1: backup size: 6222.0GB
>>
>> Hope these numbers serve as a useful reference for someone.
>>
>> Thanks
>>
>>
>> On Mon, Aug 28, 2023 at 12:30 AM Abhishek Bhola <
>> abhishek.bh...@japannext.co.jp> wrote:
>>
>>> Hi Stephen
>>>
>>> Thank you for the prompt response.
>>> Hearing it from you makes me more confident about rolling it to PROD.
>>> I will have a discussion with the network team, hear what they have to
>>> say, and make an estimate accordingly.
>>>
>>> If you happen to know anyone using it with that size and having
>>> published their numbers, that would be great, but if not, I will post them
>>> once I set it up.
>>>
>>> Thanks for your help.
>>>
>>> Cheers,
>>> Abhishek
>>>
>>> On Mon, Aug 28, 2023 at 12:22 AM Stephen Frost 
>>> wrote:
>>>
 Greetings,

 * Abhishek Bhola (abhishek.bh...@japannext.co.jp) wrote:
 > I am trying to use pgBackRest for all my Postgres servers. I have
 tested it
 > on a sample database a

Re: Index scan is not pushed down to union all subquery

2023-10-05 Thread Marian Wendt

Hi,

Is it intended that indexes are not pushed down to union all
subqueries if even a single select contains a where clause? Is this
just not implemented, is it impossible to implement or am I doing
something wrong?

The following query does a SeqScan on the "bikes" and "cars" tables even
though an IndexScan on their "dealer_name" columns would be an order of
magnitude faster. (Schema with sample data at the bottom of this mail.)

WITH
   targets as (
 select 'bike' vehicle, id, dealer_name FROM bikes WHERE frame_size = 52
 union all
 select 'car' vehicle, id, dealer_name FROM cars
 -- In the real use case I have here dozens of tables
   )
SELECT
   dealers.name dealer,
   targets.vehicle,
   targets.id
FROM
   dealers
   JOIN targets
 ON dealers.name = targets.dealer_name
   WHERE dealers.id in (54,12,456,315,468)


If the WHERE clause from the "bikes" subquery is removed then first
Index Scan on dealers_pk is made and then Bitmap Index Scans on
"bikes" and "cars" table using indexes on "dealer_name" columns.


---
-- Available also at: https://www.db-fiddle.com/f/cEXt8HXSaQzsQ2yBDA2Z4H/7
CREATE TABLE dealers AS
SELECT
   id,
   (SELECT string_agg(CHR(65+(random() * 25)::integer), '') FROM
generate_series(1, 4) WHERE id>0) name
FROM generate_series(1, 1000) AS id
;
ALTER TABLE dealers ADD primary key (id);
CREATE INDEX ON dealers(name);


CREATE TABLE bikes AS
SELECT
   generate_series AS id,
   (SELECT name FROM dealers WHERE dealers.id = (SELECT
(random()*1000)::int WHERE generate_series>0)) AS dealer_name,
   (random()*12+50)::int as frame_size
FROM generate_series(1, 100000);
ALTER TABLE bikes ADD primary key (id);
CREATE INDEX ON bikes(dealer_name);


CREATE TABLE cars AS
SELECT
   generate_series as id,
   (SELECT name FROM dealers WHERE dealers.id = (SELECT
(random()*1000)::int WHERE generate_series>0)) AS dealer_name,
   (random()*7+14)::int as wheel_size
FROM generate_series(1, 100000);
ALTER TABLE cars ADD primary key (id);
CREATE INDEX ON cars(dealer_name);


ANALYZE;


--
- Lauri




With an INNER JOIN, both tables must be fully checked/matched (check 
using EXPLAIN ANALYZE -> HashJoin), so the index cannot be used here.

--
regards, marian wendt

Re: Index scan is not pushed down to union all subquery

2023-10-05 Thread Marian Wendt

With an INNER JOIN, both tables must be fully checked/matched (check 
using EXPLAIN ANALYZE -> HashJoin), so the index cannot be used here.
Sorry, didn't consider the WITH part. Please share the detailed query 
plan for more info.

--
regards, marian wendt

Re: Index scan is not pushed down to union all subquery

2023-10-05 Thread Marian Wendt

With an INNER JOIN, both tables must be fully checked/matched (check
using EXPLAIN ANALYZE -> HashJoin), so the index cannot be used here.
Sorry, didn't consider the WITH part. Please share the detailed query
plan for more info.

The "bikes" subquery uses the field "frame_size" in its WHERE clause, but
that field does not have an index...

ADD: Consider whether it might make sense to take a more generalist
approach, with a single "vehicle" entity distinguished by "car", "bike", etc.?
ADD: Consider doing more complex "detailed" SELECTs that are unioned (if
that is really needed)?

--
regards, marian wendt

Re: Multiple inserts with two levels of foreign keys

2023-10-05 Thread Alvaro Herrera
On 2023-Oct-04, Dow Drake wrote:

> I want to insert a farm record, then insert two crops associated with that
> farm, then insert two deliveries for each of the the two crops so that in
> the end, my tables look like this:

If I understand you correctly, for each table you want one CTE with the
data you want to insert, and another CTE with the data actually
inserted, that can be matched later.  Something like this should work:

with newfarms (name) as (values ('Happy Valley Farm')),
     insertedfarms (id, name) as (insert into farms (name)
                                  select newfarms.name
                                    from newfarms
                                  returning id, name),
     newcrops (farm, name) as (values ('Happy Valley Farm', 'corn'),
                                      ('Happy Valley Farm', 'wheat')),
     insertedcrops as (insert into crops (farm_id, name)
                       select (select insertedfarms.id
                                 from insertedfarms
                                where insertedfarms.name = newcrops.farm),
                              newcrops.name
                         from newcrops
                       returning id, farm_id, name),
     newdeliveries (farm, name, ticket) as (values ('Happy Valley Farm', 'corn', '3124'),
                                                   ('Happy Valley Farm', 'wheat', '3127'),
                                                   ('Happy Valley Farm', 'corn', '3133'),
                                                   ('Happy Valley Farm', 'wheat', '3140')),
     inserteddeliveries as (insert into deliveries (crop_id, ticket)
                            select (select ics.id
                                      from insertedfarms ifs
                                           join insertedcrops ics on (ifs.id = ics.farm_id)
                                     where ifs.name = newdeliveries.farm
                                       and ics.name = newdeliveries.name),
                                   ticket
                              from newdeliveries
                            returning *)
select * from inserteddeliveries;
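
For reference, a minimal schema this query would run against (table and
column names are taken from the query itself; the types are assumed):

create table farms (id serial primary key, name text not null);
create table crops (id serial primary key,
                    farm_id integer references farms (id),
                    name text not null);
create table deliveries (id serial primary key,
                         crop_id integer references crops (id),
                         ticket text);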


-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
 Are you not unsure you want to delete Firefox?
   [Not unsure] [Not not unsure][Cancel]
   http://smylers.hates-software.com/2008/01/03/566e45b2.html




Re: Index scan is not pushed down to union all subquery

2023-10-05 Thread Dominique Devienne
On Thu, Oct 5, 2023 at 11:35 AM Marian Wendt  wrote:

> With an INNER JOIN, both tables must be fully checked/matched (check using
> EXPLAIN ANALYZE -> HashJoin), so the index cannot be used here.
> Sorry, didn't consider the WITH part. Please share the detailed query
> plan for more info.
>
> The "bikes" subquery uses the field "frame_size" in its WHERE clause, but
> that field does not have an index...
>
> ADD: Consider whether it might make sense to take a more generalist
> approach, with a single "vehicle" entity distinguished by "car",
> "bike", etc.?
> ADD: Consider doing more complex "detailed" SELECTs that are unioned (if
> that is really needed)?
>

Marian, Lauri's question is clearly about the planner, and not asking about
writing the SQL differently, or changing the data model.
Her sample data puts a 1% chance of cars or bikes matching a dealer, so
using the indexes that exist should be preferred over a full scan.
She also implies that w/o the WHERE clause in the CTE's union-all query,
the outer JOIN-clause would be pushed down (seems to me),
resulting in likely using the indexes. Lauri, you haven't said which
version of PostgreSQL. Did you assume the latest v16? My $0.02. --DD


Re: Gradual migration from integer to bigint?

2023-10-05 Thread Nick Cleaton
On Sat, 30 Sept 2023, 23:37 Tom Lane,  wrote:

>
> I think what you're asking for is a scheme whereby some rows in a
> table have datatype X in a particular column while other rows in
> the very same physical table have datatype Y in the same column.
>

An alternative for NOT NULL columns would be to use a new attnum for the
bigint version of the id, but add a column to pg_attribute that links the
new id column to the dropped old one, to avoid the table rewrite.

Global read code change needed: on finding a NULL in a NOT NULL column,
check for a link to a dropped old col and use that value instead if found.
The check could be almost free in the normal case if there's already a
check for unexpected NULL or tuple too short.

Then a metadata-only operation can create the new id col and drop and
rename and link the old id col, and fix up fkeys etc for the attnum change.

Indexes are an issue. Maybe require the in-advance creation of indexes
like btree(id::bigint), mirroring every index involving id? Those could
then be swapped in as part of the same metadata operation.
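
For illustration, the mirrored index described above can already be built
online today, unlike the direct conversion it would help avoid (a sketch;
the table name "accounts" is hypothetical):

-- built without blocking writes, mirrors the existing btree on id
CREATE INDEX CONCURRENTLY accounts_id_bigint_idx ON accounts ((id::bigint));

-- the route this idea wants to avoid: rewrites the whole table and all of
-- its indexes under an ACCESS EXCLUSIVE lock
ALTER TABLE accounts ALTER COLUMN id TYPE bigint;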


Re: pgBackRest for a 50 TB database

2023-10-05 Thread Stephen Frost
Greetings,

On Thu, Oct 5, 2023 at 03:10 Abhishek Bhola 
wrote:

> Here is the update with compress-type=zst in the config file
> Process-max is still 30. *But it took longer than before: around 27 hours
> 50 mins.*
>
> full backup: 20231004-130621F
> timestamp start/stop: 2023-10-04 13:06:21+09 / 2023-10-05 15:56:03+09
> wal start/stop: 00010001AC0E0054 / 00010001AC0E0054
> database size: 38249.0GB, database backup size: 38249.0GB
> repo1: backup size: 5799.8GB
>
> Do you think I could be missing something?
>

Sounds like there’s something else which is the bottleneck once you have
process-max at 30. I suspect you could reduce that process-max value and
have around the same time still with zstd.  Ultimately if you want it to be
faster then you’ll need to figure out what the bottleneck is (seemingly not
CPU, unlikely to be memory, so that leaves network or storage) and address
that.

We’ve seen numbers approaching 10TB/hr with lots of processes and zstd and
fast storage on high end physical hardware.

Thanks,

Stephen


Re: Index scan is not pushed down to union all subquery

2023-10-05 Thread Lauri Kajan
On Thu, Oct 5, 2023 at 12:30 PM Marian Wendt  wrote:
>
> With an INNER JOIN, both tables must be fully checked/matched (check
using EXPLAIN ANALYZE -> HashJoin), so the index cannot be used here.
> Sorry, didn't consider the WITH part. Please share the detailed query
plan for more info.

Now, it reads all the 100 000 bikes, filters out 91 778 rows, then appends
all the 100 000 cars. Then it uses a hash join to match these 108 222 rows
to 5 dealers.

In my opinion the index could be used. If I were the database, I would
first find the names of the dealers (in this case 5 dealers out of 1000)
using the dealers' primary key, then find all the bikes matching those
dealer names using the index on the dealer_name column (in the sample data
~500 bikes out of 100 000). Then I would filter those bikes by frame_size
(filtering out ~460 bikes), and append to this result set all the cars
matching the dealer_name condition (~500 cars out of 100 000), again found
using the index on the dealer_name column.


Here's the current query plan:

Hash Join  (cost=21.53..4511.74 rows=542 width=41) (actual time=0.233..27.507 rows=479 loops=1)
  Hash Cond: (bikes.dealer_name = dealers.name)
  ->  Append  (cost=0.00..3943.40 rows=108280 width=41) (actual time=0.009..20.351 rows=108222 loops=1)
        ->  Seq Scan on bikes  (cost=0.00..1826.00 rows=8280 width=41) (actual time=0.009..6.074 rows=8222 loops=1)
              Filter: (frame_size = 52)
              Rows Removed by Filter: 91778
        ->  Seq Scan on cars  (cost=0.00..1576.00 rows=100000 width=41) (actual time=0.011..9.175 rows=100000 loops=1)
  ->  Hash  (cost=21.46..21.46 rows=5 width=5) (actual time=0.041..0.043 rows=5 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 9kB
        ->  Index Scan using dealers_pkey on dealers  (cost=0.28..21.46 rows=5 width=5) (actual time=0.009..0.038 rows=5 loops=1)
              Index Cond: (id = ANY ('{54,12,456,315,468}'::integer[]))
Planning Time: 0.152 ms
Execution Time: 27.558 ms

In my dreams the plan would be something like this:
Nested Loop
  ->  Index Scan using dealers_pkey on dealers
Index Cond: (id = ANY ('{54,12,456,315,468}'::integer[]))
  ->  Append
->  Bitmap Heap Scan on bikes
  Recheck Cond: (dealer_name = dealers.name)
  Filter: (frame_size = 52)
  Rows Removed by Filter: 91
  ->  Bitmap Index Scan on bikes_dealer_name_idx
Index Cond: (dealer_name = dealers.name)
->  Bitmap Heap Scan on cars
  Recheck Cond: (dealer_name = dealers.name)
  ->  Bitmap Index Scan on cars_dealer_name_idx
Index Cond: (dealer_name = dealers.name)

If I don't add the WHERE condition for bikes, I get exactly that plan, just
without the filter node. Without the frame_size filter the execution time
is 1.028 ms.

On Thu, Oct 5, 2023 at 12:38 PM Marian Wendt  wrote:
>
> The "bikes" subquery uses field "frame_size" in WHERE clause but the
field does not have an index...
> ADD: Consider whether it might make sense to take a more generalist
approach by only having one entity vehicle with the distinction "car",
"bike", etc...?
> ADD: Consider to do more complex "detailed" SELECTs that are unioned (if
that is really needed)?

I don't want to index frame_size, since I don't use it to look up rows by
itself. It should only be used to filter the bikes of interest.
This is just a sample case; in my real use case I want to validate my
"source" objects against 80 "target" tables. I could have separate queries
for each target table, but that adds overhead in network latencies etc.
with over 80 separate queries.
And if I do the join separately for each target table and then join the
results together, that is also slow, since it accesses the dealers_pkey
index 80 times.


On Thu, Oct 5, 2023 at 12:47 PM Dominique Devienne 
wrote:
>
> ...
> Lauri, you haven't said which version of PostgreSQL. Did you assume the
latest v16?

Thanks Dominique for clarifying the question.
I have tested this on Postgres 12 and also with the latest 16.0
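
For what it's worth, a LATERAL rewrite of the query (a sketch, not something
posted in the thread) typically yields exactly the nested-loop plan sketched
above, because dealers.name is pushed into each union branch:

SELECT d.name AS dealer, t.vehicle, t.id
FROM dealers d
CROSS JOIN LATERAL (
    SELECT 'bike' AS vehicle, b.id
    FROM bikes b
    WHERE b.dealer_name = d.name AND b.frame_size = 52
    UNION ALL
    SELECT 'car' AS vehicle, c.id
    FROM cars c
    WHERE c.dealer_name = d.name
) t
WHERE d.id IN (54, 12, 456, 315, 468);

With only five dealers selected, the planner can then probe the dealer_name
indexes on bikes and cars once per dealer instead of scanning both tables.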


Re: Multiple inserts with two levels of foreign keys

2023-10-05 Thread Dow Drake
Yes!  Thanks, Alvaro!  This is exactly the pattern I was trying to work out!  
This community is awesome!





How to make a map in pg kernel?

2023-10-05 Thread jacktby jacktby
Hi, I'm writing some kernel code in PG 15, and I need a map structure. I
know C doesn't support one natively, so does PG provide an internal struct
for this? Hoping for your replies.



Ask about Foreign Table Plug-in on Windows Server.

2023-10-05 Thread Anuwat Sagulmontreechai
Hi PG Support Team,

Our customer installed PG on Windows Server 2022 and needs to use a foreign
table to connect to SQL Server on WS2022 too. We tried to find the plug-in,
but unfortunately we found only a Linux version. Could you please advise us?

Thank you in advance for your help.

Best Regards,
Anuwat Sagulmontreechai (Max)
Soft Square Group of Companies
Tel. 02-997-2000 ext.2305



Re: Multiple inserts with two levels of foreign keys

2023-10-05 Thread Ron
But honestly, the amount of text duplication hurts my "inner programmer".  
And it would have to be generated dynamically, since you don't know how many 
crops were delivered.  #shudder


On 10/5/23 09:33, Dow Drake wrote:

Yes!  Thanks, Alvaro!  This is exactly the pattern I was trying to work out!  
This community is awesome!






--
Born in Arizona, moved to Babylonia.




Re: Ask about Foreign Table Plug-in on Windows Server.

2023-10-05 Thread Laurenz Albe
On Thu, 2023-10-05 at 10:39 +, Anuwat Sagulmontreechai wrote:
> Our customer installed PG on Windows Server 2022 and needs to use a
> foreign table to connect to SQL Server on WS2022 too. We tried to find
> the plug-in, but unfortunately we found only a Linux version. Could you
> please advise us?

There are no official releases of tds_fdw for Windows.  But in this issue
(https://github.com/tds-fdw/tds_fdw/issues/53) somebody claims to have built
the extension on Windows.  You could try the same.

Yours,
Laurenz Albe
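
Should a Windows build succeed, usage would look like this (a sketch; the
server address, credentials and table names are invented, and the option
names follow the tds_fdw documentation):

CREATE EXTENSION tds_fdw;

CREATE SERVER mssql_svr
  FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername '192.168.1.10', port '1433', database 'CustomerDB');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER mssql_svr
  OPTIONS (username 'sqluser', password 'secret');

CREATE FOREIGN TABLE mssql_orders (id integer, ticket varchar)
  SERVER mssql_svr
  OPTIONS (table_name 'dbo.orders');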





No yum repo for CentOS 7 and postgres 16?

2023-10-05 Thread Jeff Ross

Hi all,

CentOS 7 isn't quite dead yet, but it appears that CentOS 7 is not
included in the new yum repo file that includes 16.


Here's a bit from the latest repo file:

Name        : pgdg-redhat-repo
Version     : 42.0
Release     : 35PGDG
Architecture: noarch
Install Date: (not installed)
Group       : Unspecified
Size        : 15459
License     : PostgreSQL
Signature   : DSA/SHA1, Thu 14 Sep 2023 06:39:48 AM MDT, Key ID 1f16d2e1442df0f8
Source RPM  : pgdg-redhat-repo-42.0-35PGDG.src.rpm
Build Date  : Thu 14 Sep 2023 06:39:32 AM MDT
Build Host  : koji-rhel-9-x86-64-pgbuild
Vendor      : PostgreSQL Global Development Group
URL         : https://yum.postgresql.org
Summary     : PostgreSQL PGDG RPMs- Yum Repository Configuration for Red Hat / Rocky / CentOS
Description :
This package contains yum configuration for Red Hat Enterprise Linux, CentOS,
and also the GPG key for PGDG RPMs.

* Tue Sep 12 2023 Devrim Gündüz  - 42.0-35PGDG
- Add v16 repos
- Remove v16 repos from RHEL 7

Really?  Might one inquire as to why? Yes, CentOS 7 is headed for EOL 
but not until June 30, 2024.


Do those of us still on CentOS 7 wanting to upgrade to 16 now have to 
build from source?


Jeff Ross


No yum repo for CentOS 7 and postgres 16?

2023-10-05 Thread David G. Johnston
On Thursday, October 5, 2023, Jeff Ross  wrote:

> Hi all,
>
> CentOS 7 isn't quite dead yet but it appears that CentOS7 is not included
> in the new yum repo file including 16.
>
> Do those of us still on CentOS 7 wanting to upgrade to 16 now have to
> build from source?
>
>
 https://yum.postgresql.org/news/rhel7-postgresql-rpms-end-of-life/

David J.


Re: No yum repo for CentOS 7 and postgres 16?

2023-10-05 Thread Jeff Ross


On 10/5/23 15:46, David G. Johnston wrote:

On Thursday, October 5, 2023, Jeff Ross  wrote:

Hi all,

CentOS 7 isn't quite dead yet, but it appears that CentOS 7 is not
included in the new yum repo file that includes 16.

Do those of us still on CentOS 7 wanting to upgrade to 16 now have
to build from source?


https://yum.postgresql.org/news/rhel7-postgresql-rpms-end-of-life/ 



David J.

Thank you David.  I looked through the postgres mailing lists and didn't
see a yum-specific one.


I'll re-direct this to that list.

Jeff R.