[GitHub] [arrow-adbc] nbenn opened a new issue, #1142: r/adbcdrivermanager: behavior of `depth` argument in `adbc_connection_get_objects()`

2023-10-01 Thread via GitHub
nbenn opened a new issue, #1142: URL: https://github.com/apache/arrow-adbc/issues/1142 Going by the docs, I was expecting that passing `depth = 4L` would get me back schema info down to the column level. ```r library(adbcdrivermanager) db <- adbc_database_init(adbcsqlite::adbcsqlite(), uri = ":me
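For readers unfamiliar with the argument, a rough sketch of what `depth` is expected to control: how far down the catalog → db_schema → table → column hierarchy the returned object tree descends. This is a hypothetical stdlib illustration of that trimming semantics, not the `adbcdrivermanager` implementation; the key names mirror the ADBC get-objects layout but the function is invented for illustration.

```python
# Hypothetical sketch (not the adbcdrivermanager API): `depth` is expected to
# cut off the catalog -> db_schema -> table -> column tree at a given level:
# 1 = catalogs only, 2 = + db_schemas, 3 = + tables, 4 = + columns.
KEYS = ["catalog_db_schemas", "db_schema_tables", "table_columns"]

def trim(nodes, depth, level=0):
    """Return a copy of the object tree with nesting below `depth` removed."""
    trimmed = []
    for node in nodes:
        node = dict(node)
        if level < len(KEYS) and KEYS[level] in node:
            if depth - 1 <= level:
                del node[KEYS[level]]  # everything below this level is dropped
            else:
                node[KEYS[level]] = trim(node[KEYS[level]], depth, level + 1)
        trimmed.append(node)
    return trimmed

# Example tree with one catalog/schema/table/column.
tree = [{"catalog_name": "main",
         "catalog_db_schemas": [{"db_schema_name": "main",
                                 "db_schema_tables": [{"table_name": "t",
                                                       "table_columns": [{"column_name": "x"}]}]}]}]

trim(tree, 1)  # -> [{'catalog_name': 'main'}]
```

Under this reading, `depth = 4L` would be the setting that includes columns, which is what the issue reports not observing.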

[GitHub] [arrow-testing] alamb merged pull request #94: Update nested_redords.avro to support nullable records

2023-10-01 Thread via GitHub
alamb merged PR #94: URL: https://github.com/apache/arrow-testing/pull/94 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@arrow.apach

[GitHub] [arrow-testing] alamb commented on pull request #94: Update nested_redords.avro to support nullable records

2023-10-01 Thread via GitHub
alamb commented on PR #94: URL: https://github.com/apache/arrow-testing/pull/94#issuecomment-1742042330 Thank you @sarutak

[GitHub] [arrow-testing] sarutak opened a new pull request, #95: Add xz,zstd,bzip2,snappy variant of alltypes_plain.avro

2023-10-01 Thread via GitHub
sarutak opened a new pull request, #95: URL: https://github.com/apache/arrow-testing/pull/95 This PR proposes to add xz, zstd, bzip2 and snappy variants of `alltypes_plain.avro`. This change is necessary for [this PR](https://github.com/apache/arrow-datafusion/pull/7718). The conte
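For context on what these variants are: Avro container files compress each data block with a named codec. Three of the codecs the PR adds roughly correspond to Python stdlib compressors, which the following sketch round-trips; it is an illustration of the codec families only, not an Avro implementation ("snappy" and "zstd" need third-party packages and are omitted here).

```python
import bz2
import lzma
import zlib

# Rough stdlib stand-ins for three Avro block codecs. This does not produce
# Avro container files; it only demonstrates the compression round-trip that
# each codec family performs on a data block.
CODECS = {
    "xz": (lzma.compress, lzma.decompress),
    "bzip2": (bz2.compress, bz2.decompress),
    "deflate": (zlib.compress, zlib.decompress),
}

block = b"example Avro data block " * 100

for name, (compress, decompress) in CODECS.items():
    assert decompress(compress(block)) == block
```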

[GitHub] [arrow-testing] alamb merged pull request #95: Add xz,zstd,bzip2,snappy variant of alltypes_plain.avro

2023-10-01 Thread via GitHub
alamb merged PR #95: URL: https://github.com/apache/arrow-testing/pull/95

[GitHub] [arrow] tschaub opened a new issue, #37968: [Go][Parquet] Panic reading records from Overture Parquet file

2023-10-01 Thread via GitHub
tschaub opened a new issue, #37968: URL: https://github.com/apache/arrow/issues/37968 ### Describe the bug, including details regarding any error messages, version, and platform. I'm running into an issue using a record reader to read Parquet data from https://github.com/OvertureMaps

[GitHub] [arrow] amoeba opened a new issue, #37969: [R] segfault when writing to ParquetFileWriter after closing

2023-10-01 Thread via GitHub
amoeba opened a new issue, #37969: URL: https://github.com/apache/arrow/issues/37969 ### Describe the bug, including details regarding any error messages, version, and platform. Writing to a closed writer causes a segfault rather than an error. I ran into this while testing something
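The fix the issue implies is the standard use-after-close guard: once `close()` has run, further writes should raise a clear error instead of touching freed native state. A minimal language-agnostic sketch in Python (the class and method names are invented for illustration, not Arrow's API):

```python
# Hypothetical sketch of the guard the issue asks for: writing after close()
# raises an error rather than reaching into already-released resources,
# which is what produces a segfault in native code.
class GuardedWriter:
    def __init__(self):
        self._closed = False
        self._batches = []  # stand-in for the underlying file/writer state

    def write_batch(self, batch):
        if self._closed:
            raise ValueError("writer is closed")
        self._batches.append(batch)

    def close(self):
        self._closed = True
```

The same pattern applies to any binding over a native writer: check a closed flag on every entry point that dereferences native state.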

[GitHub] [arrow] kou opened a new issue, #37971: [CI][Java] java-nightly cache has 8.6 GB

2023-10-01 Thread via GitHub
kou opened a new issue, #37971: URL: https://github.com/apache/arrow/issues/37971 ### Describe the enhancement requested https://github.com/apache/arrow/actions/caches > java-nightly-6371112382 > 8.6 GB cached hours ago We can use 10 GB in apache/arrow for cache. If t
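One common way to keep a nightly cache from accumulating toward the repository's 10 GB limit is to scope the cache key by date so stale entries stop being restored and age out. A hypothetical workflow fragment (the paths and key names are illustrative, not taken from apache/arrow's workflows):

```yaml
# Hypothetical fragment: rotate the java-nightly cache key weekly so old
# entries are no longer matched and get evicted by GitHub's cache eviction.
- name: Get week-of-year
  id: week
  run: echo "week=$(date +%G-%V)" >> "$GITHUB_OUTPUT"
- uses: actions/cache@v3
  with:
    path: ~/.m2/repository
    key: java-nightly-${{ steps.week.outputs.week }}-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      java-nightly-${{ steps.week.outputs.week }}-
```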

[GitHub] [arrow-adbc] matquant14 opened a new issue, #1143: Returning Snowflake query id

2023-10-01 Thread via GitHub
matquant14 opened a new issue, #1143: URL: https://github.com/apache/arrow-adbc/issues/1143 I'm starting to explore the ADBC Snowflake driver for Python. Is there a way for the ADBC cursor to return the Snowflake query id, like the cursor from the Snowflake Python connector does, after exe
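For reference, the behavior being asked about is the Snowflake Python connector's `cursor.sfqid` attribute, which holds the id of the most recently executed statement. The ADBC issue is precisely that no equivalent is documented, so the following is a hypothetical stdlib sketch of that cursor shape, with `sqlite3` standing in for the driver and a client-side id standing in for Snowflake's server-assigned query id; none of this is ADBC API.

```python
import sqlite3
import uuid

# Hypothetical illustration (not the ADBC API): a cursor wrapper that exposes
# a per-statement query id, analogous to `cursor.sfqid` in the Snowflake
# Python connector. sqlite3 is only a stand-in backend here, so the id is
# generated client-side instead of being assigned by a server.
class TrackedCursor:
    def __init__(self, conn):
        self._cursor = conn.cursor()
        self.query_id = None  # analogous to Snowflake's cursor.sfqid

    def execute(self, sql, params=()):
        self.query_id = str(uuid.uuid4())  # new id for each statement
        self._cursor.execute(sql, params)
        return self

    def fetchall(self):
        return self._cursor.fetchall()

conn = sqlite3.connect(":memory:")
cur = TrackedCursor(conn)
cur.execute("SELECT 1")
```

With the real Snowflake driver, the id would come from the server per statement; the wrapper only shows where such an attribute would live on the cursor.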