heman026 commented on issue #1759:
URL: https://github.com/apache/iceberg-python/issues/1759#issuecomment-2700632648

   > can you share more on what code you ran? specifically what is `table`, 
`pyarrow_table`, and `join_cols`
   
   ```python
   from pyiceberg import expressions as E

   iceberg_table = 'test.table1'
   table = catalog.load_table(iceberg_table)
   data = table.scan(row_filter=E.EqualTo('id', 1)).to_arrow()
   ```
   
   The Iceberg table is filtered and the result is stored in `data` (a PyArrow table). I do some computation on `data` and modify the values of one column, then write the updated values back to the original Iceberg table using upsert.
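   The "modify one column" step above can be sketched in plain PyArrow (the table contents here are hypothetical stand-ins for the filtered scan result; only the `set_column` pattern is the point):

   ```python
   import pyarrow as pa

   # Hypothetical stand-in for the filtered scan result.
   data = pa.table({
       'id': [1, 2, 3],
       'col1': ['a', 'b', 'c'],
       'value': [10, 20, 30],
   })

   # Compute new values for one column (here: doubling), then replace
   # that column in the table; PyArrow tables are immutable, so
   # set_column returns a new table.
   idx = data.schema.get_field_index('value')
   new_values = pa.array([v.as_py() * 2 for v in data['value']])
   data = data.set_column(idx, 'value', new_values)
   ```

   The resulting table keeps the original join-key columns untouched, which is what upsert matches on.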
   
   `table.upsert(data, join_cols=['id', 'col1', 'col2', 'col3'])`
   
   This is sample code. The Iceberg table I am testing with has 144 million rows and 14 columns, and the filtered data has 500 rows with the same number of columns. The table has 4 primary-key columns, which are the ones I pass in the `join_cols` argument of the upsert method.
   
   If the filtered data has around 400 rows, the upsert method works fine; with around 500 rows, it raises the above exception.
   
   Let me know if you need more details.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

