CurtHagenlocher opened a new issue, #3478: URL: https://github.com/apache/arrow-adbc/issues/3478
### What happened?

Snowflake ingestion creates a stage with the fixed name `@ADBC$BIND`, then uploads data in chunks with sequentially numbered names (1.parquet, 2.parquet, etc.). The stage name is scoped to the database and schema but is otherwise global, so two concurrent processes ingesting into tables in the same schema will collide.

One solution could be to avoid clearing the stage when it already exists, generate a unique identifier per ingestion, and upload the files as UNIQUE_1.parquet, UNIQUE_2.parquet, etc. The `COPY INTO` statement would then need `PATTERN = 'UNIQUE.*.parquet'` so that it matches only the files uploaded by that ingestion (see the sketch at the end of this issue).

### Stack Trace

_No response_

### How can we reproduce the bug?

_No response_

### Environment/Setup

_No response_
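Not how the driver currently works, but a minimal Go sketch of the statements the proposed scheme could issue. The table name `my_table`, the local `/tmp` paths, and the hex-token prefix are illustrative assumptions, and the real driver uploads in-memory buffers rather than local files, so the `PUT` lines are stand-ins:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// uniquePrefix returns a random hex token used to namespace the files a
// single ingestion uploads to the shared ADBC$BIND stage.
func uniquePrefix() string {
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	prefix := uniquePrefix()

	// Keep the stage when it already exists instead of recreating it, so a
	// concurrent ingestion's files are not dropped.
	fmt.Println("CREATE STAGE IF NOT EXISTS ADBC$BIND;")

	// PUT keeps the source file name, so the chunks would need to carry the
	// prefix locally (or be uploaded under a prefixed path) to land in the
	// stage as <prefix>_1.parquet, <prefix>_2.parquet, etc.
	for i := 1; i <= 3; i++ {
		fmt.Printf("PUT file:///tmp/%s_%d.parquet @ADBC$BIND;\n", prefix, i)
	}

	// COPY INTO restricted to this ingestion's files via PATTERN (a regex).
	fmt.Printf("COPY INTO my_table FROM @ADBC$BIND PATTERN = '.*%s_.*\\.parquet';\n", prefix)
}
```

Since Snowflake's `PATTERN` is a regular expression matched against the full stage path, anchoring it on a per-ingestion prefix should keep each concurrent `COPY INTO` from picking up the other process's chunks.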
