Re: Table copy

2025-02-05 Thread Andy Hartman
[6992] ERROR:  unexpected EOF on client connection with an open transaction
2025-02-05 12:19:44.919 EST [6992] CONTEXT:  COPY sqlt_data_1_2022_03, line
24431524, column dataintegrity
2025-02-05 12:19:44.919 EST [6992] STATEMENT:  COPY sqlt_data_1_2022_03
(tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity,
t_stamp) FROM STDIN (FORMAT BINARY)
2025-02-05 12:19:44.919 EST [6992] FATAL:  terminating connection because
protocol synchronization was lost

On Wed, Feb 5, 2025 at 11:15 AM Adrian Klaver 
wrote:

> On 2/5/25 05:40, Andy Hartman wrote:
> > mssql - 2016
> > PG - 16.1
> > latest release for SimplySql
> Latest is a relative term and requires anyone in the future coming
> across this thread to work out what you are talking about. For that
> person the current version is 2.1.0.
>
> > Invoke-SqlBulkCopy -SourceConnectionName "src" -SourceTable
> > "sqlt_data_1_2022_03" -DestinationConnectionName "dst" -DestinationTable
> > "sqlt_data_1_2022_03" -BatchSize 1 -Notify
> >
> > 2.4 billion records  -- I have done other tables of the same size with no problems
>
> Same version of SimplySql?
>
> Same source and destination databases?
>
> Did you look at the Postgres and system logs to see if there was
> relevant information?
>
>
> >
> > mssql OS Windows Server 2019
> > PG OS Windows Server 2022
> >
> > Table:
> > image.png
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Lookup tables

2025-02-05 Thread Thiemo Kellner
04.02.2025 18:12:02 David G. Johnston :

> On Tue, Feb 4, 2025 at 9:31 AM Michał Kłeczek  wrote:
> 
> Well, we were talking about lookup tables and not entity modelling...

I am under the impression that a lookup table IS an entity. You find them in 
star and snowflake models alike.

> 
>> 
>> Having surrogate keys makes your relational database more like a 
>> network/object oriented database where rows don’t represent facts but rather 
>> some entities that have identity independent from their attributes.
> 

The presence or implementation of surrogate keys does not in the least define the 
type of database. Its sole purpose is to stand in for the (speaking) business key 
so that updates on that key (think of a typo) do not end up in an update 
orgy; or, maybe, to simplify matters if your business key is made of more than 
one attribute/column. IMHO it is very good practice to still build a unique key 
on the business key and place a not-null constraint on all its attributes.
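
For illustration, a minimal sketch of that pattern (the table and column names
are mine, not from the thread):

CREATE TABLE colour (
    colour_id   integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY, -- surrogate key
    colour_code text    NOT NULL UNIQUE  -- business key: still unique, still not null
);

Referencing tables would point at colour_id, so correcting a typo in
colour_code touches exactly one row.
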
> 
> 
> My identity is separate from any single value of my attributes.  Basically 
> any single thing about me can be changed but the coherent existence of my 
> "self" remains the same.

I would not go that transcendentally far, but my attributes change and it is 
still me, even though my age increases over time, as do my good looks. ;-)

> 
> Frankly, in the restaurant example the "Owner" of the business should 
> probably be considered part of its primary key - they don't announce "under 
> new ownership/management" just for fun - the new owner wants to keep the 
> brand recognition but discard historical opinions that are likely no longer true.


I'd prefer the term business key to primary key here, as, if you choose 
to use a surrogate key, that one becomes the PK while the BK becomes a UQ. ;-) 
Sorry, I got carried away. Having said that, I would leave the decision of 
taking the owner into the BK to the project. E.g. if you want to record the 
unbroken existence of a restaurant at a certain place, I dare say the owner 
cannot be part of the BK. One could even argue that not even the name is part 
of the BK but only the geolocation (addresses can change too).


Re: Table copy

2025-02-05 Thread Andy Hartman
Nothing in the log from the mssql side, and no anti-virus.

On Wed, Feb 5, 2025 at 2:06 PM Adrian Klaver 
wrote:

>
>
> On 2/5/25 9:46 AM, Andy Hartman wrote:
> > [6992] ERROR:  unexpected EOF on client connection with an open
> transaction
> > 2025-02-05 12:19:44.919 EST [6992] CONTEXT:  COPY sqlt_data_1_2022_03,
> > line 24431524, column dataintegrity
> > 2025-02-05 12:19:44.919 EST [6992] STATEMENT:  COPY sqlt_data_1_2022_03
> > (tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity,
> > t_stamp) FROM STDIN (FORMAT BINARY)
> > 2025-02-05 12:19:44.919 EST [6992] FATAL:  terminating connection
> > because protocol synchronization was lost
>
> You need to look at the other end of the connection also, in other words
> what is happening on the MS SQL Server 2016/Windows Server 2019 side?
>
> Also is there anti-virus software running on either end?
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Table copy

2025-02-05 Thread Andy Hartman
I also reduced the batch size to 5000 on the last run... I like using
SimplySql because it's a script I can launch, so that's why I chose that
solution... I'm using it to load a History environment.

Thanks again.

On Wed, Feb 5, 2025 at 4:05 PM Andy Hartman  wrote:

> Nothing in the log from the mssql side, and no anti-virus.
>
> On Wed, Feb 5, 2025 at 2:06 PM Adrian Klaver 
> wrote:
>
>>
>>
>> On 2/5/25 9:46 AM, Andy Hartman wrote:
>> > [6992] ERROR:  unexpected EOF on client connection with an open
>> transaction
>> > 2025-02-05 12:19:44.919 EST [6992] CONTEXT:  COPY sqlt_data_1_2022_03,
>> > line 24431524, column dataintegrity
>> > 2025-02-05 12:19:44.919 EST [6992] STATEMENT:  COPY sqlt_data_1_2022_03
>> > (tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity,
>> > t_stamp) FROM STDIN (FORMAT BINARY)
>> > 2025-02-05 12:19:44.919 EST [6992] FATAL:  terminating connection
>> > because protocol synchronization was lost
>>
>> You need to look at the other end of the connection also, in other words
>> what is happening on the MS SQL Server 2016/Windows Server 2019 side?
>>
>> Also is there anti-virus software running on either end?
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>


How to get a notification

2025-02-05 Thread Igor Korot
Hi, ALL,
In my code I'm running the following:

queries.push_back( L"CREATE FUNCTION __watch_schema_changes() "
                   L"RETURNS event_trigger LANGUAGE plpgsql AS $$ "
                   L"BEGIN NOTIFY tg_tag; END; $$;" );
queries.push_back( L"CREATE EVENT TRIGGER schema_change_notify "
                   L"ON ddl_command_end WHEN TAG IN ('CREATE TABLE', "
                   L"'ALTER TABLE', 'DROP TABLE', 'CREATE INDEX', 'DROP INDEX') "
                   L"EXECUTE PROCEDURE __watch_schema_changes();" );

My questions are:
1. Is there a better way to get a notification about CREATE/ALTER/DROP TABLE?
2. How do I receive a notification about the event with the object name?

Thank you.
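
For what it's worth, question 2 can likely be addressed by reading
pg_event_trigger_ddl_commands() inside the trigger function and sending the
object identity as the NOTIFY payload via pg_notify(). A sketch, assuming a
listener on channel 'schema_change' (DROPs would additionally need a separate
sql_drop trigger using pg_event_trigger_dropped_objects()):

CREATE OR REPLACE FUNCTION __watch_schema_changes() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    -- in a ddl_command_end trigger this set-returning function
    -- lists each object touched by the command
    FOR r IN SELECT command_tag, object_identity
             FROM pg_event_trigger_ddl_commands()
    LOOP
        PERFORM pg_notify('schema_change',
                          r.command_tag || ': ' || r.object_identity);
    END LOOP;
END;
$$;

The client then runs LISTEN schema_change; and reads the object name from the
notification payload.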




Re: Lookup tables

2025-02-05 Thread Michał Kłeczek



> On 5 Feb 2025, at 21:33, Thiemo Kellner  wrote:
> 
> 
> On 05-02-25 at 13:55, Michał Kłeczek wrote:
>>> A) Your release changed the semantics of the record 3. Its meaning 
>>> changed. I cannot recommend doing that.
>> That’s what using natural keys and FK’s restricting their changes guarantee: 
>> no (accidental) changes to meaning of data.
>> Even with cascading updates you still have transactional semantics (ie. the 
>> user selects what’s on the screen or gets an error).
> Sorry, that is utter nonsense. You cannot ever guarantee an update does not 
> mess up the semantics on the updated field, change the meaning.

But you can guarantee that if you change the value of the key after the user 
displays it, the user will get an error on submission (whereas with a 
surrogate key it would happily proceed without the user noticing).

> You would need a check constraint, which in turn needs to be set up where 
> one can mess things up.
>>> B) If you absolutely must change the semantic, put your application into 
>>> maintenance mode in which noone can select anything beforehand.
>> All this error prone hassle and downtime can be avoided with natural keys 
>> and guarantees that DBMS gives you.
> And I thought you would have denied the need of changing semantics above. And 
> no, changing your natural keys semantically ALWAYS requires downtime to make 
> sure you do not run into the race condition described above.

How so? The user is going to get an FK violation; you do not need any downtime 
to make sure users don’t submit wrong values.

>>> If the maintenance would just correct the typo from GREE to GREEN, nothing 
>>> would happen. Your customer still ordered the lavishly green e-bike her 
>>> heart ever desired.
>> The question is: how do you _ensure_ that?
> Ensure the update goes from GREE to GREEN? You cannot, simple as that. You 
> can only minimize the risk by testing, testing, testing.

You can also simply disallow updates with an FK constraint, eliminating the risk.
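
A sketch of what that looks like, reusing the hypothetical colour table
sketched earlier in this thread:

CREATE TABLE bike_order (
    order_id    integer PRIMARY KEY,
    colour_code text NOT NULL
        REFERENCES colour (colour_code) ON UPDATE RESTRICT -- rejects updates of a referenced key
);

With this in place, an UPDATE of a colour_code that any order references is
rejected immediately, which is the guarantee being described.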

> But that holds equally true for the business key of a surrogate key table as 
> natural key table. That's why the surrogate key is such an elegant construct. 
> You can change business key of the record with id 3 from GREE to GREEN, VERT, 
> GRÜN, VERDE or ASéLDHK()*NSLDFHP)(*Z . It keeps its meaning of the perception 
> of the human eye of electromagnetic waves of the wavelength roughly between 
> 495-570 nm (according to Wikipedia).

And why do you think unconstrained updating of the business key is a good thing?
You must implement rules governing what can and what cannot be changed 
*somewhere* - not doing it in the database means you have to do it in 
applications.

Anyway - let’s agree to disagree :)

—
Michal



Re: Table copy

2025-02-05 Thread Ron Johnson
Could there have been a network hiccup?  Or some sort of timeout?

If I needed to transfer 360GB of data, I'd probably do something old school
like:

1. Write a PowerShell script to export a set of rows into a CSV file,
7zip-compress it, then rsync or scp it to the target.
2. Write a bash script to decompress it, then load the data into the PG
table.
3. Repeat (1) with the next set of data, and (2) until complete.  Start the
second (1) while the first (2) is running.

That's how I migrated 12GB of Oracle data to PG (except with bash, of course,
not PowerShell).
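
For step (2), the PG-side load can be as simple as one psql \copy per
decompressed chunk (the file name here is hypothetical):

\copy sqlt_data_1_2022_03 FROM 'chunk_0001.csv' WITH (FORMAT csv)

Loading in independent chunks also means a network hiccup costs you one
chunk, not the whole transfer.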

On Wed, Feb 5, 2025 at 4:05 PM Andy Hartman  wrote:

> Nothing in the log from the mssql side, and no anti-virus.
>
> On Wed, Feb 5, 2025 at 2:06 PM Adrian Klaver 
> wrote:
>
>>
>>
>> On 2/5/25 9:46 AM, Andy Hartman wrote:
>> > [6992] ERROR:  unexpected EOF on client connection with an open
>> transaction
>> > 2025-02-05 12:19:44.919 EST [6992] CONTEXT:  COPY sqlt_data_1_2022_03,
>> > line 24431524, column dataintegrity
>> > 2025-02-05 12:19:44.919 EST [6992] STATEMENT:  COPY sqlt_data_1_2022_03
>> > (tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity,
>> > t_stamp) FROM STDIN (FORMAT BINARY)
>> > 2025-02-05 12:19:44.919 EST [6992] FATAL:  terminating connection
>> > because protocol synchronization was lost
>>
>> You need to look at the other end of the connection also, in other words
>> what is happening on the MS SQL Server 2016/Windows Server 2019 side?
>>
>> Also is there anti-virus software running on either end?
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>



Re: Table copy

2025-02-05 Thread Adrian Klaver

On 2/5/25 13:09, Andy Hartman wrote:
I also reduced the batch size to 5000 on the last run... I like using 
SimplySql because it's a script I can launch, so that's why I chose that 
solution... I'm using it to load a History environment.


You really need to complete your thoughts; remember, we have no idea what 
you are seeing unless you tell us.


Reducing the batch size did what?




Thanks again.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Table copy

2025-02-05 Thread Adrian Klaver

On 2/5/25 13:05, Andy Hartman wrote:

Nothing in the log from the mssql side, and no anti-virus.


How about the Windows Server 2019 system log?



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Commit Latency

2025-02-05 Thread Álvaro Herrera
Hello

On 2025-Feb-05, Ramakrishna m wrote:

> I have a system handling *300 TPS*, with resource usage *below 10%*.
> However, I’m noticing *commit latency of around 200ms* for *1% of
> transactions*, occasionally spiking to *1 second*. Since there is no
> significant *I/O pressure*, I’m trying to identify what else might be
> causing this in *PostgreSQL 16*.

max_connections=8000 doesn't sound great -- how many of those are
active, typically, and how many are idle-in-transaction?  And you have
autovacuum_naptime=5s ... which sounds rather dubious.  Either somebody
with great expertise configured this very carefully, or the settings are
somewhat randomly chosen with little or no expert oversight.  Do you
have monitoring on the amount of bloat on these databases?  Maybe you
should consider connection pooling and limit the number that are active,
for starters.
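
A quick way to answer the active-versus-idle question, using the standard
pg_stat_activity view:

SELECT state, count(*)     -- 'active', 'idle', 'idle in transaction', ...
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;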

Maybe have a look at whether pg_wait_sampling can give you more clues.
Some basic bloat monitoring is a prerequisite to any further performance
tweaking anyhow.

-- 
Álvaro Herrera               Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
Maybe there's lots of data loss but the records of data loss are also lost.
(Lincoln Yeoh)




Re: Table copy

2025-02-05 Thread Andy Hartman
This is going to be a monthly process, not just a one-time exercise.

On Wed, Feb 5, 2025 at 5:58 PM Andy Hartman  wrote:

> Reducing the batch size still caused the error as reported; nothing in the
> Windows Server log...
>
>  SimplySql is slick because it just queries from src and loads to dst... Is
> there any way to show more of the error in PS? I tried $error but got
> nothing very descriptive.
>
> On Wed, Feb 5, 2025 at 4:23 PM Adrian Klaver 
> wrote:
>
>> On 2/5/25 13:09, Andy Hartman wrote:
>> > I also reduced the batch size to 5000 on the last run... I like using
>> > SimplySql because it's a script I can launch, so that's why I chose that
>> > solution... I'm using it to load a History environment.
>>
>> You really need to complete your thoughts, remember we have no idea what
>> you are seeing unless you tell us.
>>
>> Reducing the batch size did what?
>>
>>
>> >
>> > Thanks again.
>> >
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>


Re: Table copy

2025-02-05 Thread Andy Hartman
Reducing the batch size still caused the error as reported; nothing in the
Windows Server log...

 SimplySql is slick because it just queries from src and loads to dst... Is
there any way to show more of the error in PS? I tried $error but got nothing
very descriptive.

On Wed, Feb 5, 2025 at 4:23 PM Adrian Klaver 
wrote:

> On 2/5/25 13:09, Andy Hartman wrote:
> > I also reduced the batch size to 5000 on the last run... I like using
> > SimplySql because it's a script I can launch, so that's why I chose that
> > solution... I'm using it to load a History environment.
>
> You really need to complete your thoughts, remember we have no idea what
> you are seeing unless you tell us.
>
> Reducing the batch size did what?
>
>
> >
> > Thanks again.
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Lookup tables

2025-02-05 Thread Thiemo Kellner

On 04-02-25 at 18:08, Michał Kłeczek wrote:

Reality tends to become so ambiguous as to not be
reflectable (two entirely different restaurants eventually,
within the flow of time, carry the very same name).

A primary key is very likely not the proper place to reflect
arbitrary business logic (is it the same restaurant or not ?
what if two restaurants have the same name at the same time

These are of course problems (and beyond the scope of my contrived example).

The point is, though, that having a surrogate PK not only does not solve these 
issues but makes them worse by kicking the can down the road and allowing for 
inconsistencies.
Only if you do not see the primary key as the main immutable value 
identifying an object, an entity, you name it. Having said that, it is very 
questionable whether a natural key (names, to name one) can be a suitable 
primary key (think of typos).





Re: Lookup tables

2025-02-05 Thread Michał Kłeczek



> On 5 Feb 2025, at 19:07, Thiemo Kellner  wrote:
> 
> On 04-02-25 at 18:08, Michał Kłeczek wrote:
>>> Reality tends to become so ambiguous as to not be
>>> reflectable (two entirely different restaurants eventually,
>>> within the flow of time, carry the very same name).
>>> 
>>> A primary key is very likely not the proper place to reflect
>>> arbitrary business logic (is it the same restaurant or not ?
>>> what if two restaurants have the same name at the same time
>> These are of course problems (and beyond the scope of my contrived 
>> example).
>> 
>> The point is, though, that having a surrogate PK not only does not solve 
>> these issues but makes them worse by kicking the can down the road and 
>> allowing for inconsistencies.
> Only if you do not see the primary key as the main immutable value 
> identifying an object, an entity, you name it.

A surrogate key cannot identify any (real) object, by definition :)
What object is identified by PK value 42 in the “restaurants” table?

> Having said that, it is very questionable whether a natural key (names, to 
> name one) can be a suitable primary key (think of typos).

Typos are indeed a problem, but adding a surrogate key does not solve them, I’m 
afraid.

—
Michal



Re: Lookup tables

2025-02-05 Thread Thiemo Kellner



On 05-02-25 at 22:19, Michał Kłeczek wrote:

But you can guarantee that if you change the value of the key after the user 
displays it, the user will get an error on submission (whereas with a 
surrogate key it would happily proceed without the user noticing).
As you very rightly say, happily proceed, because that is what you 
actually want. The user chose GREE meaning green, which is updated in 
the meantime to GREEN. All good. Your solution throws an error for nothing.

How so? The user is going to get an FK violation; you do not need any downtime 
to make sure users don’t submit wrong values.
Sorry, I was not aware that your aim is to bother the customer with FK 
violation messages when you can avoid it.

Ensure the update goes from GREE to GREEN? You cannot, simple as that. You 
can only minimize the risk by testing, testing, testing.

You can also simply disallow updates with an FK constraint, eliminating the risk.
This is nothing specific to surrogate or natural keys. If one 
disallows updates, one has to live with typos and everything.

But that holds equally true for the business key of a surrogate-key table as 
for a natural-key table. That's why the surrogate key is such an elegant 
construct. You can change the business key of the record with id 3 from GREE 
to GREEN, VERT, GRÜN, VERDE or ASéLDHK()*NSLDFHP)(*Z. It keeps its meaning: 
the human eye's perception of electromagnetic waves with wavelengths roughly 
between 495 and 570 nm (according to Wikipedia).

And why do you think unconstrained updating of the business key is a good thing?
I am not sure what your take on "unconstrained" is. As already 
mentioned, testing is the only constraint you can set up. Nor am I 
sure where your problem with updating is, unless you have an 
update orgy because your ref-constraint goes on natural keys.

You must implement rules governing what can and what cannot be changed 
*somewhere*; not doing it in the database means you have to do it in 
applications.
That sounds adventurous. Maybe I am not seeing what you mean, but it 
sounds to me as if you would build a shadow database where you map your GREE 
to GREEN in the application layer so the application can display the 
correct value in the GUI, but no one has to update the core database. 
And if there is a typo in the shadow database, you build another mapping 
database on top of it?

Anyway - let’s agree to disagree :)

Agreed :-)




Using PgAgent with SQL commands only?

2025-02-05 Thread Csányi Pál
Hello,

I am using the Debian 12 operating system and
have installed on it

postgresql 15.10 (Debian 15.10-0+deb12u1)

and

pgagent -V
PostgreSQL Scheduling Agent
Version: 4.2.2

both installed with apt.

PgAgent is running as service:
systemctl status pgagent
● pgagent.service - pgAgent for PostgreSQL
 Loaded: loaded (/etc/systemd/system/pgagent.service; enabled;
preset: enabled)
 Active: active (running) since Sun 2025-02-02 08:27:43 CET; 3 days ago
Process: 92063 ExecStart=/usr/bin/pgagent -s ${LOGFILE} -l
${LOGLEVEL} host=${DBHOST} dbname=${DBNAME} user=${DBUSER}
port=${DBPORT} (code=exited, status=0/SUCCESS)
   Main PID: 92064 (pgagent)
  Tasks: 1 (limit: 6999)
 Memory: 4.4M
CPU: 10.913s
 CGroup: /system.slice/pgagent.service
 └─92064 /usr/bin/pgagent -s /var/log/pgagent/pgagent.log
-l 1 host=localhost dbname=postgres user=pgagent port=5432

Because I can't install and run pgAdmin 4 correctly on this system,
I am trying to use PgAgent with SQL commands, so far without any success.

I did the following to create a job, jobstep and schedule for that job:
(This is just a test for me to see whether I can use PgAgent with SQL commands.
This example job should dump my database every five minutes.)

For this purpose these are the SQL commands I ran so far:

sudo su - postgres
psql

INSERT INTO pgagent.pga_job (jobjclid, jobname, jobdesc, jobhostagent,
jobenabled, jobcreated, jobchanged, jobagentid, jobnextrun,
joblastrun)
VALUES (3, 'otpercenkent_menti_vagyonunk_kezelese_t', 'Ötpercenként
menti vagyonunk_kezelese adattelepet.', '', TRUE, DEFAULT, DEFAULT,
NULL, NULL, NULL);

INSERT INTO pgagent.pga_jobstep (jstjobid, jstname, jstdesc,
jstenabled, jstkind, jstcode, jstconnstr, jstdbname, jstonerror,
jscnextrun)
VALUES ((SELECT jobid FROM pgagent.pga_job WHERE jobname =
'otpercenkent_menti_vagyonunk_kezelese_t'),
'menti_vagyonunk_kezelese_t', 'Menti vagyonunk_kezelese adattelepet.',
TRUE,
 'b', 
'/home/pali/Dokumentumok/AdatTelepeim/PgAgent/PgAgent_Hasznalata/pgagent_pg_dump_vagyonunk_kezelese.sh',DEFAULT
, '', DEFAULT, NULL);

INSERT INTO pgagent.pga_schedule (jscjobid, jscname, jscdesc,
jscenabled, jscstart, jscend, jscminutes, jschours, jscweekdays,
jscmonthdays, jscmonths)
VALUES ((SELECT jobid FROM pgagent.pga_job WHERE jobname =
'otpercenkent_menti_vagyonunk_kezelese_t'),'menti_vagyonunk_kezelese_t',
'Menti vagyonunk_kezelese adattelepet.', TRUE, now(), NULL,
-- jscminutes [60]
'{f,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f,t,f,f,f,f}',
-- jschours [24]
'{t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t}',
-- jscweekdays [7]
'{t,t,t,t,t,t,t}',
-- jscmonthdays [32]
'{t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t,t}',
-- jscmonths [12]
'{t,t,t,t,t,t,t,t,t,t,t,t}');

So this way I have created a pga_job, a pga_jobstep and a pga_schedule
but this job won't run every five minutes.

What am I missing here?
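
For anyone hitting the same issue: one way to see whether pgAgent attempted
the job at all is to query its log tables (a sketch, assuming the standard
pgAgent schema):

-- most recent job runs first
SELECT j.jobname, l.jlgstatus, l.jlgstart, l.jlgduration
FROM pgagent.pga_joblog l
JOIN pgagent.pga_job j ON j.jobid = l.jlgjobid
ORDER BY l.jlgstart DESC
LIMIT 10;

An empty result would point at the agent never picking the job up (a schedule
or enablement problem), while failed rows would point at the step itself.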

Any suggestions will be appreciated!

-- 
Best, Paul Chany




Re: Lookup tables

2025-02-05 Thread Thiemo Kellner



On 05-02-25 at 13:55, Michał Kłeczek wrote:

A) Your release changed the semantics of the record 3. Its meaning changed. I 
cannot recommend doing that.

That’s what using natural keys and FK’s restricting their changes guarantee: no 
(accidental) changes to meaning of data.
Even with cascading updates you still have transactional semantics (ie. the 
user selects what’s on the screen or gets an error).
Sorry, that is utter nonsense. You cannot ever guarantee an update does 
not mess up the semantics on the updated field, change the meaning. You 
would need a check constraint, which in turn needs to be set up where 
one can mess things up.

B) If you absolutely must change the semantic, put your application into 
maintenance mode in which noone can select anything beforehand.

All this error prone hassle and downtime can be avoided with natural keys and 
guarantees that DBMS gives you.
And I thought you would have denied the need for changing semantics 
above. And no, changing your natural keys semantically ALWAYS requires 
downtime to make sure you do not run into the race condition described 
above.

If the maintenance would just correct the typo from GREE to GREEN, nothing 
would happen. Your customer still ordered the lavishly green e-bike her heart 
ever desired.

The question is: how do you _ensure_ that?
Ensure the update goes from GREE to GREEN? You cannot, simple as that. 
You can only minimize the risk by testing, testing, testing. But that 
holds equally true for the business key of a surrogate-key table as for 
a natural-key table. That's why the surrogate key is such an elegant 
construct. You can change the business key of the record with id 3 from 
GREE to GREEN, VERT, GRÜN, VERDE or ASéLDHK()*NSLDFHP)(*Z. It keeps its 
meaning: the human eye's perception of electromagnetic waves with 
wavelengths roughly between 495 and 570 nm (according to Wikipedia).





Re: Table copy

2025-02-05 Thread Adrian Klaver

On 2/5/25 05:40, Andy Hartman wrote:

mssql - 2016
PG - 16.1
latest release for SimplySql
Latest is a relative term and requires anyone in the future coming 
across this thread to work out what you are talking about. For that 
person the current version is 2.1.0.


Invoke-SqlBulkCopy -SourceConnectionName "src" -SourceTable 
"sqlt_data_1_2022_03" -DestinationConnectionName "dst" -DestinationTable 
"sqlt_data_1_2022_03" -BatchSize 1 -Notify


2.4 billion records  -- I have done other tables of the same size with no problems


Same version of SimplySql?

Same source and destination databases?

Did you look at the Postgres and system logs to see if there was 
relevant information?





mssql OS Windows Server 2019
PG OS Windows Server 2022

Table:
image.png




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Lookup tables

2025-02-05 Thread Thiemo Kellner



On 05-02-25 at 19:13, Michał Kłeczek wrote:

Only if you do not see the primary key as the main immutable value identifying 
an object, an entity, you name it.

A surrogate key cannot identify any (real) object, by definition :)
What object is identified by PK value 42 in the “restaurants” table?
What object is identified by a PK value "löasidfhaösliw" in a restaurant 
table? It is only the context that gives it sense, and no less sense than 42 
or "Pizza Hut". In fact, on disk you won't even find 42 or "Pizza Hut". 
On hard disks, e.g., it is the direction of a magnetic field at several 
locations on the disk. Btw, 42 seems to me a quite geeky name for a nerd 
restaurant.

Having said that, it is very questionable whether a natural key (names, to 
name one) can be a suitable primary key (think of typos).

Typos are indeed a problem, but adding a surrogate key does not solve them, I’m 
afraid.
In what way does it not solve it? Or, better asked: is your problem 
that typos occur, or is the problem the amount of hassle to fix them?






Re: Table copy

2025-02-05 Thread Adrian Klaver




On 2/5/25 9:46 AM, Andy Hartman wrote:

[6992] ERROR:  unexpected EOF on client connection with an open transaction
2025-02-05 12:19:44.919 EST [6992] CONTEXT:  COPY sqlt_data_1_2022_03, 
line 24431524, column dataintegrity
2025-02-05 12:19:44.919 EST [6992] STATEMENT:  COPY sqlt_data_1_2022_03 
(tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity, 
t_stamp) FROM STDIN (FORMAT BINARY)
2025-02-05 12:19:44.919 EST [6992] FATAL:  terminating connection 
because protocol synchronization was lost


You need to look at the other end of the connection also, in other words 
what is happening on the MS SQL Server 2016/Windows Server 2019 side?


Also is there anti-virus software running on either end?



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Table copy

2025-02-05 Thread Adrian Klaver

On 2/5/25 14:58, Andy Hartman wrote:
Reducing the batch size still caused the error as reported; nothing in the 
Windows Server log...

 SimplySql is slick because it just queries from src and loads to dst... 
Is there any way to show more of the error in PS? I tried $error but got 
nothing very descriptive.


Slick, except for the part where it is not working.

I have no idea what is going on at this point.

Some questions:

1) I don't know where the below is coming from:

2025-02-04 14:42:18.265 EST [4796] LOG:  could not send data to client: 
An established connection was aborted by the software in your host machine.


I have searched the Postgres and SimplySql source and I cannot find it 
in either.


Is the error message straight from the log or was it translated?

Where did you install Postgres from?

2) What are the exact versions of SimplySql and the MS SQL Server and 
Postgres drivers you are using?


3) What is the network setup between the source and destination databases?

4) What if you move an overall smaller quantity of data over?






On Wed, Feb 5, 2025 at 4:23 PM Adrian Klaver wrote:


On 2/5/25 13:09, Andy Hartman wrote:
 > I also reduced the batch size to 5000 on the last run... I like using this
 > SimplySql because it's a script I can launch, so that's why I chose that
 > solution... I'm using it to load a History environment.

You really need to complete your thoughts; remember, we have no idea what
you are seeing unless you tell us.

Reducing the batch size did what?


 >
 > Thanks again.
 >

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Lookup tables

2025-02-05 Thread Michał Kłeczek



> On 4 Feb 2025, at 22:41, Thiemo Kellner  wrote:
> 
> 04.02.2025 18:31:09 Michał Kłeczek :
> 
>> 
>>> On 4 Feb 2025, at 18:27, Thiemo Kellner  wrote:
>>> 
>>>  Unless the lookup table is actually a check constraint one can use to 
>>> populate dropdown boxes in an interface.
>> 
>> That is even worse because it ceases being transactional and users might 
>> select something different than what they see on the screen.
> 
> I might see what you want to point out. E.g. the table is COLOURS. The record 
> with id 1 is RED, the one with id 2 is BLUE, 3 is GREE and so on. Now you 
> load these values into the dropdown box, which sports RED, BLUE, GREE and so 
> on. While someone selects GREE, there is a maintenance release changing GREE 
> to YELLOW. So when that someone sends the selection by id to the backend, it 
> is not GREE that is selected but YELLOW.
> 
> A) Your release changed the semantics of the record 3. Its meaning changed. 
> I cannot recommend doing that.

That’s what using natural keys and FK’s restricting their changes guarantee: no 
(accidental) changes to meaning of data.
Even with cascading updates you still have transactional semantics (ie. the 
user selects what’s on the screen or gets an error).

> B) If you absolutely must change the semantic, put your application into 
> maintenance mode in which noone can select anything beforehand.

All this error prone hassle and downtime can be avoided with natural keys and 
guarantees that DBMS gives you.

> 
> If the maintenance would just correct the typo from GREE to GREEN, nothing 
> would happen. Your customer still ordered the lavishly green e-bike her heart 
> ever desired.

The question is: how do you _ensure_ that?

—
Michal



Re: Commit Latency

2025-02-05 Thread Tomas Vondra
I think the first thing you should do is verify the performance of the
storage, particularly how long it takes to do fsync. You didn't explain
what kind of storage or filesystem you're using, so it's hard to say.

It's not clear to me what "resource usage" means exactly, but the fsync
may be slow even when nothing else competes for I/O.
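
One SQL-level proxy for fsync cost, assuming PostgreSQL 14-17 (the v16 here
qualifies) with track_wal_io_timing = on, is the pg_stat_wal view:

SELECT wal_sync,
       wal_sync_time,  -- total time spent in WAL fsyncs, in milliseconds
       wal_sync_time / NULLIF(wal_sync, 0) AS avg_sync_ms
FROM pg_stat_wal;

For a direct measurement of raw fsync latency on the storage itself, the
pg_test_fsync utility shipped with PostgreSQL is the usual tool.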

regards
Tomas

On 2/5/25 11:18, Ramakrishna m wrote:
> HI  Team,
> I have a system handling *300 TPS*, with resource usage *below 10%*.
> However, I’m noticing *commit latency of around 200ms* for *1% of
> transactions*, occasionally spiking to *1 second*. Since there is no
> significant *I/O pressure*, I’m trying to identify what else might be
> causing this in *PostgreSQL 16*.  
> 
> Below are the parameters, Version and OS details.
> 
> image.png
> 
> Regards,
> Ram.

-- 
Tomas Vondra





Re: Table copy

2025-02-05 Thread Andy Hartman
mssql - 2016
PG - 16.1
latest release for SimplySql
Invoke-SqlBulkCopy -SourceConnectionName "src" -SourceTable
"sqlt_data_1_2022_03" -DestinationConnectionName "dst" -DestinationTable
"sqlt_data_1_2022_03" -BatchSize 1 -Notify

2.4 billion records  -- I have done other tables of the same size with no problems

mssql OS Windows Server 2019
PG OS Windows Server 2022

Table:
[image: image.png]



On Tue, Feb 4, 2025 at 3:02 PM Adrian Klaver 
wrote:

>
>
> On 2/4/25 11:51 AM, Andy Hartman wrote:
> > I'm copying a large table from mssql to PG using SImplysql and in the
> > Log I see this message
> >
> > 2025-02-04 14:42:17.975 EST [4796] ERROR:  unexpected EOF on client
> > connection with an open transaction
>
> The above pretty much spells it out.
>
> Something messed with the connection from the client.
>
> You need to look at Postgres and system logs to see if you can track
> down what?
>
> FYI, more information would be helpful:
>
> 1) MS SQL version.
>
> 2) Postgres version.
>
> 3) SimplySQL version.
>
> 4) The complete copy command.
>
> 5) The amount of data being transferred.
>
> 6) The OS'es and their versions on both ends.
>
>
> > 2025-02-04 14:42:17.975 EST [4796] CONTEXT:  COPY sqlt_data_1_2022_03,
> > line 208274199, column tagid
> > 2025-02-04 14:42:17.975 EST [4796] STATEMENT:  COPY sqlt_data_1_2022_03
> > (tagid, intvalue, floatvalue, stringvalue, datevalue, dataintegrity,
> > t_stamp) FROM STDIN (FORMAT BINARY)
> > 2025-02-04 14:42:18.158 EST [4796] FATAL:  terminating connection
> > because protocol synchronization was lost
> > 2025-02-04 14:42:18.265 EST [4796] LOG:  could not send data to client:
> > An established connection was aborted by the software in your host
> machine.
> > it then dies with a data stream error. I have copied other tables of this
> > size with no problems.
> >
> >   Thanks.
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Commit Latency

2025-02-05 Thread Laurenz Albe
On Wed, 2025-02-05 at 15:48 +0530, Ramakrishna m wrote:
> I’m noticing commit latency of around 200ms for 1% of transactions, 
> occasionally
> spiking to 1 second. Since there is no significant I/O pressure, I’m trying to
> identify what else might be causing this in PostgreSQL 16.

https://www.cybertec-postgresql.com/en/why-do-i-have-a-slow-commit-in-postgresql/

Yours,
Laurenz Albe