nbenn opened a new issue, #1142:
URL: https://github.com/apache/arrow-adbc/issues/1142
Going by the docs, I was expecting that passing `depth = 4L` would give me back
schema info down to the column level.
```r
library(adbcdrivermanager)
db <- adbc_database_init(adbcsqlite::adbcsqlite(), uri = ":memory:")
```
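For context, a hedged sketch of the call being discussed, continuing the snippet above. The `adbc_connection_get_objects()` function and its `depth` argument follow the adbcdrivermanager documentation; the exact signature and the meaning of the depth levels should be treated as assumptions here, not as part of the original report.
```r
library(adbcdrivermanager)

# Assumed reproduction sketch: query an in-memory SQLite database for its
# object hierarchy. depth selects how deep the returned metadata goes
# (catalogs -> db_schemas -> tables -> columns).
db <- adbc_database_init(adbcsqlite::adbcsqlite(), uri = ":memory:")
con <- adbc_connection_init(db)

# The reporter expected depth = 4L to return metadata down to columns;
# the result is an Arrow array stream following the ADBC GetObjects schema.
objects <- adbc_connection_get_objects(con, depth = 4L)

adbc_connection_release(con)
adbc_database_release(db)
```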
alamb merged PR #94:
URL: https://github.com/apache/arrow-testing/pull/94
alamb commented on PR #94:
URL: https://github.com/apache/arrow-testing/pull/94#issuecomment-1742042330
Thank you @sarutak
sarutak opened a new pull request, #95:
URL: https://github.com/apache/arrow-testing/pull/95
This PR proposes to add xz, zstd, bzip2, and snappy variants of
`alltypes_plain.avro`.
This change is necessary for [this
PR](https://github.com/apache/arrow-datafusion/pull/7718).
The conte
alamb merged PR #95:
URL: https://github.com/apache/arrow-testing/pull/95
tschaub opened a new issue, #37968:
URL: https://github.com/apache/arrow/issues/37968
### Describe the bug, including details regarding any error messages,
version, and platform.
I'm running into an issue using a record reader to read Parquet data from
https://github.com/OvertureMaps
amoeba opened a new issue, #37969:
URL: https://github.com/apache/arrow/issues/37969
### Describe the bug, including details regarding any error messages,
version, and platform.
Writing to a closed writer causes a segfault rather than an error. I ran
into this while testing something
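The report above is cut off; as a hedged illustration of the pattern it describes (writing after the writer has been closed), here is a minimal sketch using the arrow R bindings' IPC file writer. The choice of R and of these particular classes is an assumption for illustration, not taken from the original report.
```r
library(arrow)

# Sketch of the reported pattern: write, close the writer, then write again.
# The second write should surface an R error instead of crashing the session.
batch <- record_batch(x = 1:3)
sink <- FileOutputStream$create(tempfile(fileext = ".arrow"))
writer <- RecordBatchFileWriter$create(sink, batch$schema)

writer$write(batch)   # works
writer$close()
writer$write(batch)   # reported to segfault; an error is expected instead
```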
kou opened a new issue, #37971:
URL: https://github.com/apache/arrow/issues/37971
### Describe the enhancement requested
https://github.com/apache/arrow/actions/caches
> java-nightly-6371112382
> 8.6 GB cached hours ago
We can use up to 10 GB of cache storage in apache/arrow. If t
matquant14 opened a new issue, #1143:
URL: https://github.com/apache/arrow-adbc/issues/1143
I'm starting to explore the ADBC Snowflake driver for Python. Is there a
way for the ADBC cursor to return the Snowflake query ID, like the cursor from
the Snowflake Python connector does, after exe