Re: Foreign Data Wrapper from Oracle to Postgres 16

2025-07-05 Thread DINESH NAIR
Hi,

We found a link that may help resolve the issue you encountered while
creating the oracle_fdw extension on a Windows machine.

laurenz/oracle_fdw: PostgreSQL Foreign Data Wrapper for Oracle
https://github.com/laurenz/oracle_fdw
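
For reference, on Windows the error "The specified module could not be
found" for oracle_fdw.dll usually means a DLL that oracle_fdw.dll itself
depends on (typically oci.dll from the Instant Client) is not visible to
the PostgreSQL service. One quick way to list those dependencies, assuming
the Visual Studio build tools are installed (path illustrative):

dumpbin /DEPENDENTS "C:\Program Files\PostgreSQL\16\lib\oracle_fdw.dll"

Any DLL in that list that the service cannot find on its PATH will produce
exactly this load error.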




Thanks

Dinesh Nair



From: Laurenz Albe 
Sent: Thursday, July 3, 2025 12:20 PM
To: Santhosh S ; pgsql-gene...@postgresql.org 
; pgsql-nov...@postgresql.org 

Subject: Re: Foreign Data Wrapper from Oracle to Postgres 16


On Wed, 2025-07-02 at 23:58 +0530, Santhosh S wrote:
> I am working on a project along with my peers on developing a Foreign Data 
> Wrapper
> to transfer data from Oracle to Postgres 16. We followed the below steps in 
> order:
>
> 1. Developed the Foreign Data Wrapper (64-bit) using Microsoft Visual Studio 
> to transfer from Oracle to Postgres 16
> 2. Installed the Oracle Instant Client 64-bit version, and the Instant 
> Client path has been set in the environment variables
> 3. Have Postgres 16 64-bit version installed
> 4. Copied all the files from each folder of the downloaded oracle_fdw 
> package into the respective folders of the PostgreSQL installation 
> directory
> 5. Copied "oci.dll" from the Oracle Instant Client installation directory 
> to the PostgreSQL installation directory
> 6. The Visual C++ redistributable is installed
>
> After the above steps, when we try to execute the below statement in 
> Postgres 16
>
> CREATE EXTENSION IF NOT EXISTS oracle_fdw
> SCHEMA public
> VERSION "1.2"
>
> we get the error "SQL Error [58P01]: ERROR: could not load library 
> "C:/Program Files/PostgreSQL/16/lib/oracle_fdw.dll": The specified module 
> could not be found.
> Error position"
>
> But we are able to execute the above command successfully in Postgres 13 
> and transfer data from Oracle to Postgres 13.
>
> Any help or direction would be greatly helpful.

This is better tracked here: https://github.com/laurenz/oracle_fdw/issues/754

By the way, I did a double take when I read your report.
For me, "developing" an FDW means writing the code, whereas you are clearly
talking about what I would call "building", "compiling" or "installing"
the FDW.  No problem, I just want to avoid confusion.

Yours,
Laurenz Albe




Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread pf
On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:

>On 7/5/25 09:52, Pierre Fortin wrote:

>> Wanting to upgrade from:
>> PostgreSQL 15.13 on x86_64-mageia-linux-gnu,
>> compiled by gcc (Mageia 15.1.0-1.mga10) 15.1.0, 64-bit
>> to:
>> PG 17.5
>> 
>> Way back, I was able to use -k|--link option on pg_upgrade (PG13 to PG15);
>> but since then:
>> 
>> - my DB has grown to over 8TB  
>
>How did you measure above?

# du -sb /var/lib/pgsql/data
8227910662297   /var/lib/pgsql/data

>> - even with ~70TB, I don't have enough contiguous disk space to
>>dump/restore  
>
>What was the pg_dump command?

Didn't try given:
$ df /mnt/db
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdh1        17T   13T  3.0T  82% /mnt/db

I suppose I could dump each of the 1408 objects to various available
drives; but given that my previous experience going from PG13 to PG15 with
--link took seconds, I'm hoping to avoid wasting time (at my age, hours
matter).

Cheers,
Pierre




Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Adrian Klaver

On 7/5/25 09:52, Pierre Fortin wrote:

Hi,

[Hope this gets through after dumping DKIM-ignorant mail provider.]

Wanting to upgrade from:
PostgreSQL 15.13 on x86_64-mageia-linux-gnu,
compiled by gcc (Mageia 15.1.0-1.mga10) 15.1.0, 64-bit
to:
PG 17.5

Way back, I was able to use -k|--link option on pg_upgrade (PG13 to PG15);
but since then:

- my DB has grown to over 8TB


How did you measure above?


- even with ~70TB, I don't have enough contiguous disk space to
   dump/restore


What was the pg_dump command?



Thanks,
Pierre









--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread pf
On Sat, 05 Jul 2025 13:04:55 -0400 Tom Lane wrote:

>You cannot do pg_upgrade without a copy of the old postgres
>server binary as well as the new one.

Bummer.  Wish I had skills & time to try to overcome this...

Thanks!
Pierre




Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Ron Johnson
On Sat, Jul 5, 2025 at 2:11 PM Adrian Klaver 
wrote:

> On 7/5/25 09:52, Pierre Fortin wrote:
> > Hi,
> >
> > [Hope this gets through after dumping DKIM-ignorant mail provider.]
> >
> > Wanting to upgrade from:
> > PostgreSQL 15.13 on x86_64-mageia-linux-gnu,
> > compiled by gcc (Mageia 15.1.0-1.mga10) 15.1.0, 64-bit
> > to:
> > PG 17.5
> >
> > Way back, I was able to use -k|--link option on pg_upgrade (PG13 to
> PG15);
> > but since then:
> >
> > - my DB has grown to over 8TB
>
> How did you measure above?
>
> > - even with ~70TB, I don't have enough contiguous disk space to
> >dump/restore
>
> What was the pg_dump command?


For something that big, he must have been doing an uncompressed plain
format dump instead of a directory/custom format dump.  Maybe even added
--attribute-inserts too.

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Tom Lane
p...@pfortin.com writes:
> On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:
>> How did you measure above?

> # du -sb /var/lib/pgsql/data
> 8227910662297   /var/lib/pgsql/data

It's likely that there's a deal of bloat in that.  Even if there's not
much bloat, this number will include indexes and WAL data that don't
appear in pg_dump output.
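
One rough way to estimate the table data alone, run inside each database
(this excludes indexes, though not bloat or per-page overhead):

SELECT pg_size_pretty(sum(pg_table_size(oid))) FROM pg_class WHERE relkind = 'r';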

>> What was the pg_dump command?

> Didn't try given:
> $ df /mnt/db
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sdh1        17T   13T  3.0T  82% /mnt/db

I'd say give it a try; be sure to use one of the pg_dump modes
that compress the data.
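
For instance, the custom and directory formats compress by default
(database name illustrative):

pg_dump --format=custom --file=db1.dump db1
pg_dump --format=directory --jobs=4 --file=db1.dir db1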

regards, tom lane




pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Pierre Fortin
Hi,

[Hope this gets through after dumping DKIM-ignorant mail provider.]

Wanting to upgrade from:
PostgreSQL 15.13 on x86_64-mageia-linux-gnu, 
compiled by gcc (Mageia 15.1.0-1.mga10) 15.1.0, 64-bit
to:
PG 17.5 

Way back, I was able to use -k|--link option on pg_upgrade (PG13 to PG15);
but since then:

- my DB has grown to over 8TB
- even with ~70TB, I don't have enough contiguous disk space to
  dump/restore
- my Linux distro (Mageia) is not set up to handle multiple versions of
  postgres (installing 17.5 removes 15.13). Worse, it failed to install
  part of the package when it saw /var/lib/pgsql/data still there:
  https://bugs.mageia.org/show_bug.cgi?id=34306

I've glanced at the pg_upgrade source code (my C skills are ancient) and
it appears pg_upgrade is virtually the same from 15.13 to 17.5.

My question:  did I miss anything, or would:
$ pg_upgrade -d data15 -D data17 -k
suffice?

Besides not noticing a significant difference between the two versions, the
docs at https://www.postgresql.org/docs/current/pgupgrade.html
say: "default is the directory where pg_upgrade resides"

If the new pg_upgrade is the only binary, will both -b and -B default to it?
Maybe at minimum I need to specify:
$ pg_upgrade -b /usr/bin -d data15 -D data17 -k
?

Thanks,
Pierre









Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Tom Lane
Pierre Fortin  writes:
> - my Linux distro (Mageia) is not set up to handle multiple versions of
>   postgres (installing 17.5 removes 15.13).

Ugh.  You cannot do pg_upgrade without a copy of the old postgres
server binary as well as the new one.  pg_upgrade by itself is not
capable of accessing either set of catalogs.

Way back when I was packaging PG for Red Hat, they didn't support
multiple concurrently-installed package versions either, so what I did
was to provide an auxiliary pg_upgrade package that contained an old
server binary as well as pg_upgrade itself.  Perhaps Mageia has done
something similar, or could be cajoled to once you point out that
their packaging makes it impossible to do an upgrade.

If that path yields no joy, you'll need to use a hand-built copy of
one PG version or the other while performing the upgrade.  Might want
to think about migrating to some less PG-unfriendly distro while
you are at it.
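
For what it's worth, once both sets of binaries are available, the call
would look something like this (paths illustrative; -b points at the old
server's binaries, -B at the new one's):

pg_upgrade -b /usr/pgsql-15/bin -B /usr/pgsql-17/bin \
   -d /var/lib/pgsql/data15 -D /var/lib/pgsql/data17 --link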

regards, tom lane




Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread David G. Johnston
On Sat, Jul 5, 2025 at 9:52 AM Pierre Fortin  wrote:

> If new pg_upgrade is the only binary, will both -b and -B default to it?
> Maybe at minimum I may need to specify:
> $ pg_upgrade -b /usr/bin -d data15 -D data17 -k
>

(demonstrating what happens when a newer server binary is pointed at an
older cluster's data directory:)

pgsql/pgsql-18/bin > ./pg_ctl -D /var/pgsql/postgres-17 start
waiting for server to start....2025-07-05 16:58:56.559 UTC [293839] FATAL:
 database files are incompatible with server
2025-07-05 16:58:56.559 UTC [293839] DETAIL:  The data directory was
initialized by PostgreSQL version 17, which is not compatible with this
version 18beta1.
pg_ctl: control file appears to be corrupt


David J.


Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Ron Johnson
On Sat, Jul 5, 2025 at 2:24 PM  wrote:

> On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:
>
> >On 7/5/25 09:52, Pierre Fortin wrote:
>
> >> Wanting to upgrade from:
> >> PostgreSQL 15.13 on x86_64-mageia-linux-gnu,
> >> compiled by gcc (Mageia 15.1.0-1.mga10) 15.1.0, 64-bit
> >> to:
> >> PG 17.5
> >>
> >> Way back, I was able to use -k|--link option on pg_upgrade (PG13 to
> PG15);
> >> but since then:
> >>
> >> - my DB has grown to over 8TB
> >
> >How did you measure above?
>
> # du -sb /var/lib/pgsql/data
> 8227910662297   /var/lib/pgsql/data
>
> >> - even with ~70TB, I don't have enough contiguous disk space to
> >>dump/restore
> >
> >What was the pg_dump command?
>
> Didn't try given:
> $ df /mnt/db
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sdh1        17T   13T  3.0T  82% /mnt/db
>
> I suppose I could dump each of the 1408 objects to various available
> drives; but given that my previous experience going from PG13 to PG15 with
> --link took seconds, I'm hoping to avoid wasting time (at my age, hours
> matter).
>

There's something you're not telling us.  The whole point of "pg_upgrade
--link" is an in-place upgrade. It might use a few extra GB of disk space
when it backs up the PG15 schema and restores it into PG17.  So why are you
concerned about running out of disk space?
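
(--link hard-links the user data files into the new cluster instead of
copying them; one way to see this after an upgrade, paths illustrative, is
that old and new relation files share a link count and inode number:)

stat -c '%h %i' data15/base/16384/16385 data17/base/16384/16385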

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Pierre Fortin
On Sat, 05 Jul 2025 14:30:20 -0400 Tom Lane wrote:

Forgive my ignorance; always trying to learn more... :)

>p...@pfortin.com writes:
>> On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:  
>>> How did you measure above?  
>
>> # du -sb /var/lib/pgsql/data
>> 8227910662297   /var/lib/pgsql/data  
>
>It's likely that there's a deal of bloat in that.  Even if there's not
>much bloat, this number will include indexes and WAL data that don't
>appear in pg_dump output.

Does this imply that on restore, I'll have to re-index everything?

>>> What was the pg_dump command?  
>
>> Didn't try given:
>> $ df /mnt/db
>> Filesystem  Size  Used Avail Use% Mounted on
>> /dev/sdh1        17T   13T  3.0T  82% /mnt/db
>
>I'd say give it a try; be sure to use one of the pg_dump modes
>that compress the data.

OK...  I failed to mention I have several databases in this cluster; so
digging into pg_dumpall, I see:
   --binary-upgrade
This option is for use by in-place upgrade utilities. Its use for
other purposes is not recommended or supported. The behavior of the
option may change in future releases without notice.

pg_upgrade has a --link option; but I'm puzzled by this option in a
dumpall/restore process. My imagination wonders if this alludes to a way
to do something like:
 pg_dumpall --globals-only --roles-only --schema-only ...
Would restoring this be a way to update only the control structures? Big
assumption that the actual data remains untouched...

Inquiring mind...  :)

Back to my upgrade issue...  
All my DBs are static (only queries once loaded). Assuming the dumpall
file fits on one of my drives:
 pg_dumpall -f /PG.backup -v 
appears to be all I need? pg_dump has compression by default; but I don't
see compression with dumpall other than for TOAST. 

Thanks, You guys are awesome!
 
>   regards, tom lane




Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Ron Johnson
On Sat, Jul 5, 2025 at 3:19 PM Pierre Fortin  wrote:

> On Sat, 05 Jul 2025 14:30:20 -0400 Tom Lane wrote:
>
> Forgive my ignorance; always trying to learn more... :)
>
> >p...@pfortin.com writes:
> >> On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:
> >>> How did you measure above?
> >
> >> # du -sb /var/lib/pgsql/data
> >> 8227910662297   /var/lib/pgsql/data
> >
> >It's likely that there's a deal of bloat in that.  Even if there's not
> >much bloat, this number will include indexes and WAL data that don't
> >appear in pg_dump output.
>
> Does this imply that on restore, I'll have to re-index everything?
>
> >>> What was the pg_dump command?
> >
> >> Didn't try given:
> >> $ df /mnt/db
> >> Filesystem  Size  Used Avail Use% Mounted on
> >> /dev/sdh1        17T   13T  3.0T  82% /mnt/db
> >
> >I'd say give it a try; be sure to use one of the pg_dump modes
> >that compress the data.
>
> OK...  I failed to mention I have several databases in this cluster; so
> digging into pg_dumpall, I see:
>--binary-upgrade
> This option is for use by in-place upgrade utilities. Its use for
> other purposes is not recommended or supported. The behavior of the
> option may change in future releases without notice.
>
> pg_upgrade has --link option; but I'm puzzled by this option in a
> dumpall/restore process.


It's _not_ part of a dumpall/restore process.

You _either_ run
- pg_upgrade --link
  OR
- pg_dumpall --globals-only > globals.sql / psql -f globals.sql
- pg_dump --format=directory / pg_restore --format=directory of db1
- pg_dump --format=directory / pg_restore --format=directory of db2
- pg_dump --format=directory / pg_restore --format=directory of db3
- pg_dump --format=directory / pg_restore --format=directory of etc...

Why not a plain pg_dumpall of the whole instance?  Because that would
create a GINORMOUS text file which can only be loaded in a single-threaded
manner.
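
A concrete sketch of the second sequence (names illustrative; the target
database must exist before pg_restore runs):

pg_dumpall --globals-only > globals.sql
psql -f globals.sql                # on the new cluster
pg_dump --format=directory --jobs=4 --file=db1.dir db1
createdb db1                       # on the new cluster
pg_restore --jobs=4 --dbname=db1 db1.dir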


> My imagination wonders if this alludes to a way
> to do something like:
>  pg_dumpall --globals-only --roles-only --schema-only ...
> Would restoring this be a way to update only the control structures? Big
> assumption that the actual data remains untouched...
>
> Inquiring mind...  :)
>
> Back to my upgrade issue...
> All my DBs are static (only queries once loaded). Assuming the dumpall
> file fits on one of my drives:
>  pg_dumpall -f /PG.backup -v
> appears to be all I need? pg_dump has compression by default; but I don't
> see compression with dumpall other than for TOAST.
>
> Thanks, You guys are awesome!
>
> >   regards, tom lane
>
>
>

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!


Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Tom Lane
Pierre Fortin  writes:
> OK...  I failed to mention I have several databases in this cluster; so
> digging into pg_dumpall, I see:
>--binary-upgrade
> This option is for use by in-place upgrade utilities. Its use for
> other purposes is not recommended or supported. The behavior of the
> option may change in future releases without notice.

That is infrastructure for pg_upgrade to use.  Do not try to use it
manually; it won't end well.

> All my DBs are static (only queries once loaded). Assuming the dumpall
> file fits on one of my drives:
>  pg_dumpall -f /PG.backup -v 
> appears to be all I need? pg_dump has compression by default; but I don't
> see compression with dumpall other than for TOAST.

I would try that first before messing with compression.  If it doesn't
fit, you'll need to do pg_dumpall --globals-only (mainly to capture
your role definitions) and then pg_dump each database into a separate
compressed file.

regards, tom lane




Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Adrian Klaver

On 7/5/25 12:19, Pierre Fortin wrote:

On Sat, 05 Jul 2025 14:30:20 -0400 Tom Lane wrote:

Forgive my ignorance; always trying to learn more... :)


p...@pfortin.com writes:

On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:

How did you measure above?



# du -sb /var/lib/pgsql/data
8227910662297   /var/lib/pgsql/data


It's likely that there's a deal of bloat in that.  Even if there's not
much bloat, this number will include indexes and WAL data that don't
appear in pg_dump output.


Does this imply that on restore, I'll have to re-index everything?


The dump file includes CREATE INDEX commands and per:

https://www.postgresql.org/docs/current/sql-createindex.html

"Creating an index can interfere with regular operation of a database. 
Normally PostgreSQL locks the table to be indexed against writes and 
performs the entire index build with a single scan of the table. Other 
transactions can still read the table, but if they try to insert, 
update, or delete rows in the table they will block until the index 
build is finished. This could have a severe effect if the system is a 
live production database. Very large tables can take many hours to be 
indexed, and even for smaller tables, an index build can lock out 
writers for periods that are unacceptably long for a production system."


Which is why pg_restore:

https://www.postgresql.org/docs/current/app-pgrestore.html

has:

"-j number-of-jobs
--jobs=number-of-jobs

Run the most time-consuming steps of pg_restore — those that load 
data, create indexes, or create constraints — concurrently, using up to 
number-of-jobs concurrent sessions. This option can dramatically reduce 
the time to restore a large database to a server running on a 
multiprocessor machine. This option is ignored when emitting a script 
rather than connecting directly to a database server."
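
For example (names illustrative), restoring a directory-format dump with
eight parallel workers, so the data loads and CREATE INDEX steps run
concurrently:

pg_restore --jobs=8 --dbname=db1 db1.dir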






What was the pg_dump command?



Didn't try given:
$ df /mnt/db
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdh1        17T   13T  3.0T  82% /mnt/db


I'd say give it a try; be sure to use one of the pg_dump modes
that compress the data.


OK...  I failed to mention I have several databases in this cluster; so
digging into pg_dumpall, I see:
--binary-upgrade
 This option is for use by in-place upgrade utilities. Its use for
 other purposes is not recommended or supported. The behavior of the
 option may change in future releases without notice.

pg_upgrade has --link option; but I'm puzzled by this option in a
dumpall/restore process. My imagination wonders if this alludes to a way
to do something like:
  pg_dumpall --globals-only --roles-only --schema-only ...
Would restoring this be a way to update only the control structures? Big
assumption that the actual data remains untouched...

Inquiring mind...  :)

Back to my upgrade issue...
All my DBs are static (only queries once loaded). Assuming the dumpall
file fits on one of my drives:
  pg_dumpall -f /PG.backup -v
appears to be all I need? pg_dump has compression by default; but I don't
see compression with dumpall other than for TOAST.

Thanks, You guys are awesome!
  

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Adrian Klaver

On 7/5/25 11:24, p...@pfortin.com wrote:

On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:




Didn't try given:
$ df /mnt/db
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdh1        17T   13T  3.0T  82% /mnt/db


You said you have ~70TB of free space, so where is the other ~63TB?



I suppose I could dump each of the 1408 objects to various available
drives; but given that my previous experience going from PG13 to PG15 with
--link took seconds, I'm hoping to avoid wasting time (at my age, hours
matter).

Cheers,
Pierre




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Adrian Klaver

On 7/5/25 12:19, Pierre Fortin wrote:

On Sat, 05 Jul 2025 14:30:20 -0400 Tom Lane wrote:




I'd say give it a try; be sure to use one of the pg_dump modes
that compress the data.


OK...  I failed to mention I have several databases in this cluster; so
digging into pg_dumpall, I see:
--binary-upgrade
 This option is for use by in-place upgrade utilities. Its use for
 other purposes is not recommended or supported. The behavior of the
 option may change in future releases without notice.

pg_upgrade has --link option; but I'm puzzled by this option in a
dumpall/restore process. My imagination wonders if this alludes to a way
to do something like:
  pg_dumpall --globals-only --roles-only --schema-only ...
Would restoring this be a way to update only the control structures? Big
assumption that the actual data remains untouched...

Inquiring mind...  :)

Back to my upgrade issue...
All my DBs are static (only queries once loaded). Assuming the dumpall
file fits on one of my drives:
  pg_dumpall -f /PG.backup -v


If you really want to use pg_dumpall and get compression, then something 
like:

pg_dumpall -U postgres | gzip > pg_backup.gz

Though this will take some time and really is probably better handled using:

pg_dumpall -U postgres -g > pg_globals.sql

and then:

pg_dump -d <dbname> -U <user> -Fc -f <dbname>.out

for each database. This will use compression by default.
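
One way to script that per-database loop (a sketch, assuming the postgres 
user can connect without a password prompt):

for db in $(psql -U postgres -At -c \
    "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
  pg_dump -U postgres -Fc -f "$db.out" "$db"
done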

Neither of these options will be as quick as doing pg_upgrade with 
--link. Though at this point you are boxed in by not being able to run 
multiple Postgres versions on one machine.




appears to be all I need? pg_dump has compression by default; but I don't
see compression with dumpall other than for TOAST.

Thanks, You guys are awesome!
  

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_upgrade: can I use same binary for old & new?

2025-07-05 Thread Pierre Fortin
On Sat, 5 Jul 2025 12:58:10 -0700 Adrian Klaver wrote:

>On 7/5/25 11:24, p...@pfortin.com wrote:
>> On Sat, 5 Jul 2025 11:11:32 -0700 Adrian Klaver wrote:
>>   
>
>> Didn't try given:
>> $ df /mnt/db
>> Filesystem  Size  Used Avail Use% Mounted on
>> /dev/sdh1        17T   13T  3.0T  82% /mnt/db
>
>You said you have ~70TB of free space, so where is the other  ~63TB?

I never said "free space" with ~70TB; that's the total space across about 8
drives :)
The biggest free space I have is 7.6TB, which is less than the 8TB DB;
but thanks to the responses, I should be able to make this work...

Also, I appreciate the clarification re CREATE INDEX (Doh!) and --jobs

Best,
Pierre