Thanks Laurenz Albe,

1. I tried running the COPY FROM command from pgAdmin.
2. I also ran pg_stat_activity in another tab [pgAdmin].

What I observed:
1. After about 2 min, in the pgAdmin Dashboard, the colour changed to orange for that particular pid.
2. After a few seconds, the colour changed again to red.
3. The state column in both the Dashboard and pg_stat_activity.state shows 'active'.
4. No messages relevant to this pid in the PostgreSQL log files.
5. CPU usage is normal, hovering around 36% overall and 15-18% for the PostgreSQL server, from the beginning and even after 3 min.
6. We didn't run any other application on that machine.
7. A unique index exists on table2, so that SELECT COUNT(*) query will return only one row.
8. There were no records in the target table when the transfer started. This transfer is the first batch for that particular table.
9. Once the colour turned red, I could cancel the query execution in pgAdmin, which immediately stopped the execution.
10. I could not see any locks or blocking pids.

Happiness Always
BKR Sivaprakash

On Wednesday 23 July, 2025 at 03:46:40 pm IST, Laurenz Albe <laurenz.a...@cybertec.at> wrote:

On Wed, 2025-07-23 at 08:50 +0000, sivapostg...@yahoo.com wrote:
> Tried in PostgreSQL 11.11 , PostgreSQL 15.2 in Windows 10
Both of these choices are unsavory.  Don't use the unsupported v11, and use 15.13 with v15.

> Here we try to transfer data from one database to another (remote) database.
>
> Tables do have records ranging from 85000 to 3600000, along with smaller sized tables.
> No issues while transferring smaller sized tables.
>
> I here take one particular table [table1] which has 85000 records.
> The table has a primary key, foreign key(s), and triggers. The trigger updates another table [table2].
> Table2 has 2 triggers, one to arrive at a closing value and the other to delete, if the closing value is zero.
>
> 1. Transfer the data from the source database to a csv file. 85000 records transferred. No issues.
> 2. Transfer the file to the remote location. No issues.
> 3. Transfer the contents of the file to the table using the COPY FROM command. Fails when we try to transfer all the 85000 records at once.
>
> The COPY FROM command is
>
>   Copy public.table1 From 'E:\temp\file1.csv' (FORMAT CSV, DELIMITER ',', HEADER TRUE)
>
> The above command succeeds, when
> 1. The trigger in table1 is disabled, with all other constraints on.
> 2. The no. of rows is 16000 or less, with the trigger enabled. We haven't tried with a higher no. of rows.
>
> The above command goes into an infinite loop, when
> 1. We try to transfer all 85000 rows at once, with the trigger and other constraints in table1 enabled.
>    We waited for 1.5 hrs the first time and 2.5 hrs the second time before cancelling the operation.
>
> I read in the documentation that the fastest way to transfer data is to use the COPY command.
> And I couldn't find any limit on transferring data using that command.
> One could easily transfer millions of rows using this command.

There is no limit on the number of rows that can be created by a single COPY.

You should research why processing fails for higher row counts:

- Are there any messages on the client or the server side?
- Is the backend process on the server busy (consuming CPU) when processing hangs?
- Do you see locks or other wait events in "pg_stat_activity"?

> Here are the triggers.
>
> Trigger function, which is called from Table1 on After Insert, Update, Delete

One thing you could try is a BEFORE trigger.  That should work the same, unless there are foreign key constraints.

Do you see high memory usage or paging for the backend process when the COPY hangs?

> [...]
> If (Select Count(*)
>       From table2
>       WHERE companycode = company_code
>       AND branchcode = branch_code
>       AND locationfk = location_fk
>       AND barcode = variety_code ) > 0 Then
> [...]

That may well be slow, particularly without a matching index.
A better way to write that would be

  IF EXISTS (SELECT 1 FROM table2 WHERE ...)

because that can stop processing after the first match.
It still needs an index for fast processing.

Yours,
Laurenz Albe
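The wait-event and blocking checks discussed above can be run against the standard pg_stat_activity view while the COPY appears to hang. A minimal sketch (the filters are only illustrative; run it from a second session):

```sql
-- What is each non-idle backend doing?  If the COPY's backend shows a
-- non-null wait_event, that names what it is waiting on; if wait_event
-- is null and state is 'active', the backend is actually executing.
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE state <> 'idle';

-- Is the hanging backend blocked by a lock held elsewhere?
-- pg_blocking_pids() returns an empty array when nothing blocks the pid.
SELECT pid, pg_blocking_pids(pid) AS blocked_by
FROM pg_stat_activity
WHERE backend_type = 'client backend';
```

An empty `blocked_by` array together with an 'active' state and no wait event would point away from locking and toward the trigger simply doing a lot of work per row.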
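As a sketch of the suggested rewrite, filled in with the column names from the quoted trigger (the index name is illustrative, and whether such an index matches the real schema is an assumption):

```sql
-- EXISTS can stop at the first matching row, while COUNT(*) must
-- count every match before the comparison with 0 can be evaluated.
IF EXISTS (SELECT 1
           FROM table2
           WHERE companycode = company_code
             AND branchcode  = branch_code
             AND locationfk  = location_fk
             AND barcode     = variety_code) THEN
    -- [...]
END IF;

-- A multi-column index on the looked-up columns lets either form
-- find the row quickly instead of scanning table2 for every
-- inserted row of the COPY.
CREATE INDEX table2_lookup_idx
    ON table2 (companycode, branchcode, locationfk, barcode);
```

Since the trigger runs once per inserted row, an unindexed lookup here turns an 85000-row COPY into 85000 scans of table2, which is consistent with the reported slowdown at higher row counts.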