jkolash commented on issue #10277:
URL: https://github.com/apache/iceberg/issues/10277#issuecomment-2103344935

   In this case the data is written by Snowflake, and the version number looks like a timestamp rather than an auto-incrementing counter (see the sketch after the file listing below).
   ```
   test_snowflake_table
   test_snowflake_table/data
   test_snowflake_table/data/snow_CYr21sbt9Ps_ALiJR-PqzBc_0_2_002.parquet
   test_snowflake_table/metadata
   test_snowflake_table/metadata/version-hint.text
   test_snowflake_table/metadata/v1715003877288000000.metadata.json
   test_snowflake_table/metadata/1715003877288000000-_0LSkJly75ls9-Mfg7ymhA.avro
   test_snowflake_table/metadata/snap-1715003877288000000.avro
   ```
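
   The version number embedded in those file names is consistent with that: parsed as nanoseconds since the Unix epoch it lands in early May 2024. A minimal Scala sketch, using only the value visible above:
   ```scala
   import java.time.Instant

   // Version number as it appears in the metadata file names above.
   val version = 1715003877288000000L

   // Interpreted as nanoseconds since the epoch this is a recent timestamp,
   // not a small auto-incrementing counter.
   println(Instant.ofEpochSecond(0L, version))  // 2024-05-06T13:57:57.288Z
   ```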
   
   We aren't particularly interested in writing data to Snowflake, but we are interested in using the Hadoop catalog to read the data after it has landed on S3. Our goal is to have Snowflake write the data to S3 and then read it directly from S3, without connecting to the Snowflake catalog, so we don't have to "know" that it came from Snowflake. A configuration sketch for this follows.
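
   A minimal sketch of that setup (Scala, Spark 3.4). The bucket/prefix and the catalog name "sf" are placeholders, and it assumes the iceberg-spark-runtime and S3 filesystem jars are on the classpath:
   ```scala
   import org.apache.spark.sql.SparkSession

   // Point an Iceberg Hadoop catalog straight at the S3 prefix Snowflake
   // writes into -- no connection to the Snowflake catalog is needed.
   val spark = SparkSession.builder()
     .appName("read-iceberg-from-s3")
     .config("spark.sql.catalog.sf", "org.apache.iceberg.spark.SparkCatalog")
     .config("spark.sql.catalog.sf.type", "hadoop")
     .config("spark.sql.catalog.sf.warehouse", "s3://my-bucket/warehouse")
     .getOrCreate()

   // The directory layout under the warehouse determines the identifier
   // (a namespace level may be needed); the table can also be loaded by path
   // with spark.read.format("iceberg").load("s3://my-bucket/warehouse/test_snowflake_table").
   spark.sql("SELECT * FROM sf.test_snowflake_table LIMIT 10").show()
   ```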
   
   I've tested that I can query the table via Spark 3.4 once I switch the metadata version handling from a 32-bit int to a 64-bit long, as sketched below.
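
   A minimal sketch of what that switch amounts to (not the actual Iceberg change), using the version number from the listing above:
   ```scala
   // Version number as it appears in the version hint / metadata files above.
   val hint = "1715003877288000000"

   // Parsing it as a 32-bit int overflows:
   // hint.toInt   // java.lang.NumberFormatException: For input string: "1715003877288000000"

   // Parsing it as a 64-bit long works:
   val version: Long = hint.toLong
   ```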
   

