This is an automated email from the ASF dual-hosted git repository.
lidavidm pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow-adbc.git
The following commit(s) were added to refs/heads/main by this push:
new fab11bb8a docs: generate driver status from README badges (#2890)
fab11bb8a is described below
commit fab11bb8a0eb46ac2e3ccd14985ff80a403f267b
Author: David Li <[email protected]>
AuthorDate: Fri Jul 18 09:40:11 2025 +0900
docs: generate driver status from README badges (#2890)
Fixes #2764.
---------
Co-authored-by: Bryce Mecum <[email protected]>
---
c/driver/bigquery/README.md | 9 +
c/driver/flightsql/README.md | 36 ++
c/driver/postgresql/README.md | 11 +-
c/driver/snowflake/README.md | 12 +
c/driver/sqlite/README.md | 9 +
{ruby => c/integration/duckdb}/README.md | 39 +--
csharp/src/Drivers/Apache/Hive2/README.md | 4 +
csharp/src/Drivers/Apache/Impala/README.md | 4 +
csharp/src/Drivers/Apache/Spark/README.md | 4 +
csharp/src/Drivers/BigQuery/readme.md | 7 +
csharp/src/Drivers/Databricks/readme.md | 4 +
csharp/src/Drivers/FlightSql/README.md | 28 ++
docs/source/conf.py | 2 +
docs/source/driver/duckdb.rst | 2 +-
docs/source/driver/flight_sql.rst | 52 +--
docs/source/driver/jdbc.rst | 18 +-
docs/source/driver/postgresql.rst | 57 +---
docs/source/driver/snowflake.rst | 38 +--
docs/source/driver/sqlite.rst | 41 +--
docs/source/driver/status.rst | 95 ++----
docs/source/ext/adbc_misc.py | 370 +++++++++++++++++++++
.../snowflake => java/driver/flight-sql}/README.md | 16 +-
{c/driver/snowflake => java/driver/jdbc}/README.md | 16 +-
.../adbc_driver_bigquery/__init__.py | 44 +--
ruby/README.md | 2 +
rust/driver/datafusion/README.md | 6 +
26 files changed, 596 insertions(+), 330 deletions(-)
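For orientation, the rst changes below replace the hand-maintained "Available for" lines and installation tab-sets with new directives that read each driver's README badges. A driver page now reduces to a fragment like this (paths as used in this commit for the SQLite driver):

```rst
.. adbc_driver_status:: ../../../c/driver/sqlite/README.md

Installation
============

.. adbc_driver_installation:: ../../../c/driver/sqlite/README.md
```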
diff --git a/c/driver/bigquery/README.md b/c/driver/bigquery/README.md
index 1bfd92cb3..e41090d8a 100644
--- a/c/driver/bigquery/README.md
+++ b/c/driver/bigquery/README.md
@@ -19,6 +19,15 @@
# ADBC BigQuery Driver
+
+
+
+
+[](https://anaconda.org/conda-forge/adbc-driver-bigquery)
+[](https://anaconda.org/conda-forge/libadbc-driver-bigquery)
+[](https://pypi.org/project/adbc-driver-bigquery/)
+[](https://community.r-multiverse.org/adbcbigquery/)
+
This driver provides an interface to
[BigQuery](https://cloud.google.com/bigquery) using ADBC.
diff --git a/c/driver/flightsql/README.md b/c/driver/flightsql/README.md
new file mode 100644
index 000000000..ca2d90629
--- /dev/null
+++ b/c/driver/flightsql/README.md
@@ -0,0 +1,36 @@
+<!---
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an
+ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ KIND, either express or implied. See the License for the
+ specific language governing permissions and limitations
+ under the License.
+-->
+
+# ADBC Arrow Flight SQL Driver
+
+
+
+
+
+[](https://anaconda.org/conda-forge/adbc-driver-flightsql)
+[](https://anaconda.org/conda-forge/libadbc-driver-flightsql)
+[](https://pypi.org/project/adbc-driver-flightsql/)
+[](https://community.r-multiverse.org/adbcflightsql/)
+
+This driver provides an interface to databases supporting
+[Apache Arrow Flight SQL](https://arrow.apache.org/docs/format/FlightSql.html) using ADBC.
+
+## Building
+
+See [CONTRIBUTING.md](../../../CONTRIBUTING.md) for details.
diff --git a/c/driver/postgresql/README.md b/c/driver/postgresql/README.md
index 402a237b7..b83ce2de7 100644
--- a/c/driver/postgresql/README.md
+++ b/c/driver/postgresql/README.md
@@ -19,8 +19,17 @@
# ADBC PostgreSQL Driver
+
+
+
+
+[](https://anaconda.org/conda-forge/adbc-driver-postgresql)
+[](https://anaconda.org/conda-forge/libadbc-driver-postgresql)
+[](https://cran.r-project.org/web/packages/adbcpostgresql/index.html)
+[](https://pypi.org/project/adbc-driver-postgresql/)
+
This implements an ADBC driver that wraps [libpq][libpq], the client
-library for PostgreSQL. This is still a work in progress.
+library for PostgreSQL.
This project owes credit to 0x0L's [pgeon][pgeon] for the overall
approach.
diff --git a/c/driver/snowflake/README.md b/c/driver/snowflake/README.md
index 75c67d3ee..180b4c0fc 100644
--- a/c/driver/snowflake/README.md
+++ b/c/driver/snowflake/README.md
@@ -19,6 +19,18 @@
# ADBC Snowflake Driver
+
+
+
+
+[](https://anaconda.org/conda-forge/adbc-driver-snowflake)
+[](https://anaconda.org/conda-forge/libadbc-driver-snowflake)
+[](https://crates.io/crates/adbc_snowflake)
+[](https://pkg.go.dev/github.com/apache/arrow-adbc/go/adbc/driver/snowflake)
+[](https://www.nuget.org/packages/Apache.Arrow.Adbc.Drivers.Interop.Snowflake)
+[](https://pypi.org/project/adbc-driver-snowflake/)
+[](https://community.r-multiverse.org/adbcsnowflake/)
+
This driver provides an interface to
[Snowflake](https://www.snowflake.com/) using ADBC.
diff --git a/c/driver/sqlite/README.md b/c/driver/sqlite/README.md
index 4137f24c4..1f1bdadd6 100644
--- a/c/driver/sqlite/README.md
+++ b/c/driver/sqlite/README.md
@@ -19,6 +19,15 @@
# ADBC SQLite Driver
+
+
+
+
+[](https://anaconda.org/conda-forge/adbc-driver-sqlite)
+[](https://anaconda.org/conda-forge/libadbc-driver-sqlite)
+[](https://cran.r-project.org/web/packages/adbcsqlite/index.html)
+[](https://pypi.org/project/adbc-driver-sqlite/)
+
This driver provides an interface to
[SQLite](https://sqlite.org/index.html) using ADBC.
diff --git a/ruby/README.md b/c/integration/duckdb/README.md
similarity index 53%
copy from ruby/README.md
copy to c/integration/duckdb/README.md
index 5a99d3c57..7621e0737 100644
--- a/ruby/README.md
+++ b/c/integration/duckdb/README.md
@@ -17,38 +17,13 @@
under the License.
-->
-# Red ADBC
+# ADBC DuckDB Integration Test
-Red ADBC is the Ruby bindings of ADBC GLib.
+
+
+
-## How to install
+[](https://anaconda.org/conda-forge/python-duckdb)
+[](https://pypi.org/project/duckdb/)
-If you want to install Red ADBC by Bundler, you can add the following
-to your `Gemfile`:
-
-```ruby
-plugin "rubygems-requirements-system"
-
-gem "red-adbc"
-```
-
-If you want to install Red ADBC by RubyGems, you can use the following
-command line:
-
-```console
-$ gem install rubygems-requirements-system
-$ gem install red-adbc
-```
-
-## How to use
-
-```ruby
-require "adbc"
-
-ADBC::Database.open(driver: "adbc_driver_sqlite",
- uri: ":memory:") do |database|
- database.connect do |connection|
- puts(connection.query("SELECT 1"))
- end
-end
-```
+This package provides an integration test between
[DuckDB](https://duckdb.org/) and ADBC.
diff --git a/csharp/src/Drivers/Apache/Hive2/README.md b/csharp/src/Drivers/Apache/Hive2/README.md
index 42f974be1..cde81ecd6 100644
--- a/csharp/src/Drivers/Apache/Hive2/README.md
+++ b/csharp/src/Drivers/Apache/Hive2/README.md
@@ -19,6 +19,10 @@
# Hive Driver
+
+
+
+
## Database and Connection Properties
Properties should be passed in the call to `HiveServer2Driver.Open`,
diff --git a/csharp/src/Drivers/Apache/Impala/README.md b/csharp/src/Drivers/Apache/Impala/README.md
index adceeb51e..6313c83b3 100644
--- a/csharp/src/Drivers/Apache/Impala/README.md
+++ b/csharp/src/Drivers/Apache/Impala/README.md
@@ -19,6 +19,10 @@
# Impala Driver
+
+
+
+
## Database and Connection Properties
Properties should be passed in the call to `ImpalaDriver.Open`,
diff --git a/csharp/src/Drivers/Apache/Spark/README.md b/csharp/src/Drivers/Apache/Spark/README.md
index b0f42e58a..6c44c8523 100644
--- a/csharp/src/Drivers/Apache/Spark/README.md
+++ b/csharp/src/Drivers/Apache/Spark/README.md
@@ -19,6 +19,10 @@
# Spark Driver
+
+
+
+
## Database and Connection Properties
Properties should be passed in the call to `SparkDriver.Open`,
diff --git a/csharp/src/Drivers/BigQuery/readme.md b/csharp/src/Drivers/BigQuery/readme.md
index d49018afa..13d2ec6cc 100644
--- a/csharp/src/Drivers/BigQuery/readme.md
+++ b/csharp/src/Drivers/BigQuery/readme.md
@@ -18,6 +18,13 @@
-->
# BigQuery
+
+
+
+
+
+[](https://www.nuget.org/packages/Apache.Arrow.Adbc.Drivers.BigQuery)
+
The BigQuery ADBC driver wraps a
[BigQueryClient](https://cloud.google.com/dotnet/docs/reference/Google.Cloud.BigQuery.V2/latest/Google.Cloud.BigQuery.V2.BigQueryClient)
object for working with [Google BigQuery](https://cloud.google.com/bigquery/)
data.
# Supported Features
diff --git a/csharp/src/Drivers/Databricks/readme.md b/csharp/src/Drivers/Databricks/readme.md
index af6ecd901..d7fe4843a 100644
--- a/csharp/src/Drivers/Databricks/readme.md
+++ b/csharp/src/Drivers/Databricks/readme.md
@@ -19,6 +19,10 @@
# Databricks
+
+
+
+
The Databricks ADBC driver is built on top of the Spark ADBC driver and
inherits many of its [properties](../Apache/Spark/readme.md).
The Databricks ADBC driver supports the following authentication methods:
diff --git a/csharp/src/Drivers/FlightSql/README.md b/csharp/src/Drivers/FlightSql/README.md
new file mode 100644
index 000000000..9b4a95d0e
--- /dev/null
+++ b/csharp/src/Drivers/FlightSql/README.md
@@ -0,0 +1,28 @@
+<!--
+
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+-->
+
+# Arrow Flight SQL
+
+
+
+
+
+[](https://www.nuget.org/packages/Apache.Arrow.Adbc.Drivers.FlightSql)
+
+This driver provides an interface to databases supporting [Apache Arrow Flight SQL](https://arrow.apache.org/docs/format/FlightSql.html) using ADBC.
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 0c7fb600f..b65e39dd9 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -40,6 +40,8 @@ version = release
exclude_patterns = []
extensions = [
+ # misc directives
+ "adbc_misc",
# recipe directive
"sphinx_recipe",
# generic directives to enable intersphinx for java
diff --git a/docs/source/driver/duckdb.rst b/docs/source/driver/duckdb.rst
index 94460eb53..870efb821 100644
--- a/docs/source/driver/duckdb.rst
+++ b/docs/source/driver/duckdb.rst
@@ -19,7 +19,7 @@
DuckDB Support
==============
-**Available for:** C/C++, GLib/Ruby, Go, Python, R
+.. adbc_driver_status:: ../../../c/integration/duckdb/README.md
`DuckDB`_ provides ADBC support since `version 0.8.0
<https://duckdb.org/2023/05/17/announcing-duckdb-080.html>`_.
diff --git a/docs/source/driver/flight_sql.rst b/docs/source/driver/flight_sql.rst
index 1e30d77ee..11cb7b191 100644
--- a/docs/source/driver/flight_sql.rst
+++ b/docs/source/driver/flight_sql.rst
@@ -19,7 +19,7 @@
Flight SQL Driver
=================
-**Available for:** C/C++, GLib/Ruby, Go, Java, Python, R
+.. adbc_driver_status:: ../../../c/driver/flightsql/README.md
The Flight SQL Driver provides access to any database implementing a
:doc:`arrow:format/FlightSql` compatible endpoint.
@@ -27,55 +27,7 @@ The Flight SQL Driver provides access to any database implementing a
Installation
============
-.. tab-set::
-
- .. tab-item:: C/C++
- :sync: cpp
-
- For conda-forge users:
-
- .. code-block:: shell
-
- mamba install libadbc-driver-flightsql
-
- .. tab-item:: Go
- :sync: go
-
- .. code-block:: shell
-
- go get github.com/apache/arrow-adbc/go/adbc
-
- .. tab-item:: Java
- :sync: java
-
- Add a dependency on ``org.apache.arrow.adbc:adbc-driver-flight-sql``.
-
- For Maven users:
-
- .. code-block:: xml
-
- <dependency>
- <groupId>org.apache.arrow.adbc</groupId>
- <artifactId>adbc-driver-flight-sql</artifactId>
- </dependency>
-
- .. tab-item:: Python
- :sync: python
-
- .. code-block:: shell
-
- # For conda-forge
- mamba install adbc-driver-flightsql
-
- # For pip
- pip install adbc_driver_flightsql
-
- .. tab-item:: R
- :sync: r
-
- .. code-block:: r
-
- install.packages("adbcflightsql", repos = "https://community.r-multiverse.org")
+.. adbc_driver_installation:: ../../../c/driver/flightsql/README.md
Usage
=====
diff --git a/docs/source/driver/jdbc.rst b/docs/source/driver/jdbc.rst
index b8203a50b..63261db93 100644
--- a/docs/source/driver/jdbc.rst
+++ b/docs/source/driver/jdbc.rst
@@ -19,28 +19,14 @@
JDBC Adapter
============
-**Available for:** Java
+.. adbc_driver_status:: ../../../java/driver/jdbc/README.md
The JDBC Adapter provides access to any database with a JDBC driver.
Installation
============
-.. tab-set::
-
- .. tab-item:: Java
- :sync: java
-
- Add a dependency on ``org.apache.arrow.adbc:adbc-driver-jdbc``.
-
- For Maven users:
-
- .. code-block:: xml
-
- <dependency>
- <groupId>org.apache.arrow.adbc</groupId>
- <artifactId>adbc-driver-jdbc</artifactId>
- </dependency>
+.. adbc_driver_installation:: ../../../java/driver/jdbc/README.md
Usage
=====
diff --git a/docs/source/driver/postgresql.rst b/docs/source/driver/postgresql.rst
index e79838df4..0d57c4de1 100644
--- a/docs/source/driver/postgresql.rst
+++ b/docs/source/driver/postgresql.rst
@@ -19,7 +19,7 @@
PostgreSQL Driver
=================
-**Available for:** C/C++, GLib/Ruby, Go, Python, R
+.. adbc_driver_status:: ../../../c/driver/postgresql/README.md
The PostgreSQL driver provides access to any database that supports
the PostgreSQL wire format. It wraps `libpq`_, the client library for
@@ -31,58 +31,17 @@ overall approach.
.. _libpq: https://www.postgresql.org/docs/current/libpq.html
.. _pgeon: https://github.com/0x0L/pgeon
-.. note:: The PostgreSQL driver is in beta.
- Performance/optimization and support for complex types and
- different ADBC features is still ongoing.
-
-.. note:: AWS Redshift supports a very old version of the PostgreSQL
- wire protocol and has a basic level of support in the ADBC
- PostgreSQL driver. Because Redshift does not support reading or
- writing COPY in PostgreSQL binary format, the optimizations that
- accelerate non-Redshift queries are not enabled when connecting
- to a Redshift database. This functionality is experimental.
+.. note:: This driver has experimental support for Amazon Redshift. As
+ Redshift does not support reading or writing COPY in PostgreSQL
+ binary format, however, the optimizations that accelerate queries
+ are not enabled when connecting to Redshift. There may also be
+ other differences in functionality; please file a bug report if
+ problems are encountered.
Installation
============
-.. tab-set::
-
- .. tab-item:: C/C++
- :sync: cpp
-
- For conda-forge users:
-
- .. code-block:: shell
-
- mamba install libadbc-driver-postgresql
-
- .. tab-item:: Go
- :sync: go
-
- Install the C/C++ package and use the Go driver manager.
- Requires CGO.
-
- .. code-block:: shell
-
- go get github.com/apache/arrow-adbc/go/adbc/drivermgr
-
- .. tab-item:: Python
- :sync: python
-
- .. code-block:: shell
-
- # For conda-forge
- mamba install adbc-driver-postgresql
-
- # For pip
- pip install adbc_driver_postgresql
-
- .. tab-item:: R
- :sync: r
-
- .. code-block:: r
-
- install.packages("adbcpostgresql")
+.. adbc_driver_installation:: ../../../c/driver/postgresql/README.md
Usage
=====
diff --git a/docs/source/driver/snowflake.rst b/docs/source/driver/snowflake.rst
index 38969e2ca..beb94ba4b 100644
--- a/docs/source/driver/snowflake.rst
+++ b/docs/source/driver/snowflake.rst
@@ -19,48 +19,14 @@
Snowflake Driver
================
-**Available for:** C/C++, GLib/Ruby, Go, Python, R
+.. adbc_driver_status:: ../../../c/driver/snowflake/README.md
The Snowflake Driver provides access to Snowflake Database Warehouses.
Installation
============
-.. tab-set::
-
- .. tab-item:: C/C++
- :sync: cpp
-
- For conda-forge users:
-
- .. code-block:: shell
-
- mamba install libadbc-driver-snowflake
-
- .. tab-item:: Go
- :sync: go
-
- .. code-block:: shell
-
- go get github.com/apache/arrow-adbc/go/adbc/driver/snowflake
-
- .. tab-item:: Python
- :sync: python
-
- .. code-block:: shell
-
- # For conda-forge
- mamba install adbc-driver-snowflake
-
- # For pip
- pip install adbc_driver_snowflake
-
- .. tab-item:: R
- :sync: r
-
- .. code-block:: shell
-
- install.packages("adbcsnowflake", repos = "https://community.r-multiverse.org")
+.. adbc_driver_installation:: ../../../c/driver/snowflake/README.md
Usage
=====
diff --git a/docs/source/driver/sqlite.rst b/docs/source/driver/sqlite.rst
index fc6b41384..a1c113d3d 100644
--- a/docs/source/driver/sqlite.rst
+++ b/docs/source/driver/sqlite.rst
@@ -19,7 +19,7 @@
SQLite Driver
=============
-**Available for:** C/C++, GLib/Ruby, Go, Python, R
+.. adbc_driver_status:: ../../../c/driver/sqlite/README.md
The SQLite driver provides access to SQLite databases.
@@ -30,44 +30,7 @@ not received attention to optimization.
Installation
============
-.. tab-set::
-
- .. tab-item:: C/C++
- :sync: cpp
-
- For conda-forge users:
-
- .. code-block:: shell
-
- mamba install libadbc-driver-sqlite
-
- .. tab-item:: Go
- :sync: go
-
- Install the C/C++ package and use the Go driver manager.
- Requires CGO.
-
- .. code-block:: shell
-
- go get github.com/apache/arrow-adbc/go/adbc/drivermgr
-
- .. tab-item:: Python
- :sync: python
-
- .. code-block:: shell
-
- # For conda-forge
- mamba install adbc-driver-sqlite
-
- # For pip
- pip install adbc_driver_sqlite
-
- .. tab-item:: R
- :sync: r
-
- .. code-block:: r
-
- install.packages("adbcsqlite")
+.. adbc_driver_installation:: ../../../c/driver/sqlite/README.md
Usage
=====
diff --git a/docs/source/driver/status.rst b/docs/source/driver/status.rst
index a32db56a9..141438f32 100644
--- a/docs/source/driver/status.rst
+++ b/docs/source/driver/status.rst
@@ -25,90 +25,45 @@ Driver Implementation Status
details, see `GH-1841
<https://github.com/apache/arrow-adbc/issues/1841>`_.
-Implementation Status
-=====================
-
-**Experimental** drivers are not feature-complete and the implementation is still progressing.
-**Beta** drivers are (mostly) feature-complete but have only been available for a short time.
-**Stable** drivers are (mostly) feature-complete (as much as possible for the underlying database) and have been available/tested for a while.
-
.. note::
Drivers that support C/C++ can also be used from C#, GLib, Go, Python, R,
Ruby, and Rust, regardless of their implementation language.
-.. list-table::
- :header-rows: 1
-
- * - Driver
- - Supported Languages
- - Implementation Language
- - Status
-
- * - Apache DataFusion
- - Rust
- - Rust
- - Experimental
-
- * - BigQuery (C#)
- - C#
- - C#
- - Experimental
-
- * - BigQuery (Go)
- - C/C++
- - Go
- - Experimental
-
- * - DuckDB [#duckdb]_
- - C/C++
- - C++
- - Stable
-
- * - Flight SQL (Go)
- - C/C++, C# [#wrapper]_
- - Go
- - Stable
-
- * - Flight SQL (Java)
- - Java
- - Java
- - Experimental
-
- * - JDBC Adapter
- - Java
- - Java
- - Experimental
+.. _driver-status:
- * - PostgreSQL
- - C/C++
- - C++
- - Stable
+Driver Status
+=============
- * - SQLite
- - C/C++
- - C
- - Stable
+**Experimental** drivers are not feature-complete and the implementation is still progressing.
+**Beta** drivers are (mostly) feature-complete but have only been available for a short time.
+**Stable** drivers are (mostly) feature-complete (as much as possible for the underlying database) and have been available/tested for a while.
- * - Snowflake
- - C/C++, Rust [#wrapper]_
- - Go
- - Stable
+Feature Support
+===============
- * - Thrift protocol-based [#thrift]_
- - C#
- - C#
- - Experimental
+.. adbc_driver_status_table::
+
+ ../../../c/driver/bigquery/README.md
+ ../../../csharp/src/Drivers/Apache/Hive2/README.md
+ ../../../csharp/src/Drivers/Apache/Impala/README.md
+ ../../../csharp/src/Drivers/Apache/Spark/README.md
+ ../../../csharp/src/Drivers/BigQuery/readme.md
+ ../../../csharp/src/Drivers/Databricks/readme.md
+ ../../../csharp/src/Drivers/FlightSql/README.md
+ ../../../java/driver/flight-sql/README.md
+ ../../../rust/driver/datafusion/README.md
+ ./duckdb => ../../../c/integration/duckdb/README.md [#duckdb]
+ ./flight_sql => ../../../c/driver/flightsql/README.md
+ ./jdbc => ../../../java/driver/jdbc/README.md
+ ./postgresql => ../../../c/driver/postgresql/README.md
+ ./snowflake => ../../../c/driver/snowflake/README.md
+ ./sqlite => ../../../c/driver/sqlite/README.md
.. [#duckdb] DuckDB is developed and provided by a third party. See the
`DuckDB documentation
<https://duckdb.org/docs/stable/clients/adbc.html>`_ for details.
-.. [#thrift] Supports Apache Hive/Impala/Spark.
-
-.. [#wrapper] Listed separately because a wrapper package is provided that
- combines the driver and the bindings for you.
-
Feature Support
===============
diff --git a/docs/source/ext/adbc_misc.py b/docs/source/ext/adbc_misc.py
new file mode 100644
index 000000000..827ac4aec
--- /dev/null
+++ b/docs/source/ext/adbc_misc.py
@@ -0,0 +1,370 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Misc directives for the ADBC docs."""
+
+import collections
+import dataclasses
+import functools
+import itertools
+import typing
+from pathlib import Path
+
+import docutils
+import sphinx
+from docutils.statemachine import StringList
+from sphinx.util.docutils import SphinxDirective
+from sphinx.util.nodes import nested_parse_with_titles
+from sphinx.util.typing import OptionSpec
+
+LOGGER = sphinx.util.logging.getLogger(__name__)
+
+# conda-forge is handled specially
+_REPO_TO_LANGUAGE = {
+ "CRAN": "R",
+ "crates.io": "Rust",
+ "Go": "Go",
+ "Maven": "Java",
+ "NuGet": "C#",
+ "RubyGems": "Ruby",
+ "R-multiverse": "R",
+ "PyPI": "Python",
+}
+
+_LANGUAGE_TO_KEY = {
+ "C/C++": "cpp",
+ "C#": "csharp",
+}
+
+
[email protected](frozen=True)
+class DriverStatus:
+ vendor: str
+ implementation: str
+ status: typing.Literal["Experimental", "Beta", "Stable"]
+ packages: typing.List[typing.Tuple[str, str, str]] # (repo, package, URL)
+
+ @property
+ def badge_type(self) -> str:
+ if self.status == "Experimental":
+ return "danger"
+ elif self.status == "Beta":
+ return "warning"
+ elif self.status == "Stable":
+ return "success"
+ else:
+ raise ValueError(f"Unknown status {self.status} for {self.implementation}")
+
+
[email protected]
+def _driver_status(path: Path) -> DriverStatus:
+ # we could pull in a full markdown parser, but for now just munge the text
+ meta: typing.Dict[str, str] = {}
+ packages = []
+ with path.open() as source:
+ for line in source:
+ if "img.shields.io" in line:
+ before, _, after = line.partition("img.shields.io")
+ tag = before[before.index("![") + 2 : before.index("]")].strip()
+ key, _, value = tag.partition(": ")
+ key = key.strip()
+ value = value.strip()
+
+ if key.lower() in {"vendor", "implementation", "status"}:
+ meta[key.lower()] = value
+ else:
+ repo = key
+ url = after[after.rfind("(") + 1 : after.rfind(")")].strip()
+ packages.append((repo, value, url))
+ return DriverStatus(**meta, packages=packages)
+
+
+def driver_status(path: Path) -> DriverStatus:
+ return _driver_status(path.resolve())
+
+
+class DriverInstallationDirective(SphinxDirective):
+ has_content = False
+ required_arguments = 1
+ optional_arguments = 0
+ option_spec: OptionSpec = {}
+
+ def run(self):
+ rel_filename, filename = self.env.relfn2path(self.arguments[0])
+ self.env.note_dependency(rel_filename)
+
+ path = Path(filename).resolve()
+ status = driver_status(path)
+ is_native = status.implementation in {"C/C++", "C#", "Go", "Rust"}
+
+ generated_lines = []
+
+ if not status.packages:
+ generated_lines.append("No packages available; install from source.")
+ else:
+ generated_lines.append(".. tab-set::")
+
+ # language : list of (repo, package, url)
+ languages = collections.defaultdict(list)
+
+ for i, (repo, package, url) in enumerate(status.packages):
+ language = None
+ if repo == "conda-forge":
+ if package.startswith("lib"):
+ language = "C/C++"
+ else:
+ language = "Python"
+ else:
+ language = _REPO_TO_LANGUAGE.get(repo)
+
+ if language is None:
+ LOGGER.warning(
+ f"Unknown language mapping for package repo {repo}",
+ type="adbc_misc",
+ )
+ continue
+
+ languages[language].append((repo, package, url))
+
+ if "Go" not in languages and is_native:
+ languages["Go"] = []
+
+ for language, packages in sorted(languages.items(), key=lambda x: x[0]):
+ generated_lines.append("")
+ generated_lines.append(f" .. tab-item:: {language}")
+ generated_lines.append(
+ f" :sync: {_LANGUAGE_TO_KEY.get(language, language.lower())}"
+ )
+ generated_lines.append("")
+
+ for repo, package, url in sorted(
+ packages, key=lambda x: (x[0].lower(), x[1])
+ ):
+ generated_lines.append(
+ f" Install `{package} <{url}>`__ from {repo}:"
+ )
+ generated_lines.append("")
+ if repo == "conda-forge":
+ generated_lines.append(" .. code-block:: shell")
+ generated_lines.append("")
+ generated_lines.append(f" mamba install {package}")
+ elif repo == "crates.io":
+ generated_lines.append(" .. code-block:: shell")
+ generated_lines.append("")
+ generated_lines.append(f" cargo add {package}")
+ elif repo == "CRAN":
+ generated_lines.append(" .. code-block:: r")
+ generated_lines.append("")
+ generated_lines.append(
+ f' install.packages("{package}")'
+ )
+ elif repo == "Go":
+ generated_lines.append(" .. code-block:: shell")
+ generated_lines.append("")
+ generated_lines.append(f" go get {package}")
+ elif repo == "Maven":
+ group, artifact = package.split(":")
+ generated_lines.append(" .. code-block:: xml")
+ generated_lines.append("")
+ generated_lines.append(" <dependency>")
+ generated_lines.append(f" <groupId>{group}</groupId>")
+ generated_lines.append(
+ f" <artifactId>{artifact}</artifactId>"
+ )
+ generated_lines.append(" </dependency>")
+ elif repo == "NuGet":
+ generated_lines.append(" .. code-block:: shell")
+ generated_lines.append("")
+ generated_lines.append(f" dotnet package add {package}")
+ elif repo == "PyPI":
+ generated_lines.append(" .. code-block:: shell")
+ generated_lines.append("")
+ generated_lines.append(f" pip install {package}")
+ elif repo == "R-multiverse":
+ generated_lines.append(" .. code-block:: r")
+ generated_lines.append("")
+ generated_lines.append(
+ f' install.packages("{package}", '
+ 'repos = "https://community.r-multiverse.org")'
+ )
+ else:
+ LOGGER.warning(f"Unknown package repo {repo}", type="adbc_misc")
+ continue
+ generated_lines.append("")
+
+ if not packages and is_native:
+ if language == "Go":
+ generated_lines.append(
+ " Install the C/C++ driver, "
+ "then use the Go driver manager. "
+ "Requires CGO."
+ )
+ generated_lines.append("")
+ generated_lines.append(" .. code-block:: shell")
+ generated_lines.append("")
+ generated_lines.append(
+ " go get "
+ "github.com/apache/arrow-adbc/go/adbc/drivermgr"
+ )
+ else:
+ LOGGER.warning(
+ f"No packages and unknown language {language}",
+ type="adbc_misc",
+ )
+
+ if is_native:
+ generated_lines.append("")
+ generated_lines.append(
+ "Additionally, the driver may be used from C/C++, C#, GLib, "
+ "Go, R, Ruby, and Rust via the driver manager."
+ )
+
+ parsed = docutils.nodes.Element()
+ nested_parse_with_titles(
+ self.state,
+ StringList(generated_lines, source=""),
+ parsed,
+ )
+ return parsed.children
+
+
+class DriverStatusDirective(SphinxDirective):
+ has_content = False
+ required_arguments = 1
+ optional_arguments = 0
+ option_spec: OptionSpec = {}
+
+ def run(self):
+ rel_filename, filename = self.env.relfn2path(self.arguments[0])
+ self.env.note_dependency(rel_filename)
+
+ path = Path(filename).resolve()
+ status = driver_status(path)
+
+ generated_lines = [
+ f":bdg-primary:`Language: {status.implementation}`",
+ f":bdg-ref-{status.badge_type}:`Status: {status.status} <driver-status>`",
+ ]
+
+ parsed = docutils.nodes.Element()
+ nested_parse_with_titles(
+ self.state,
+ StringList(generated_lines, source=""),
+ parsed,
+ )
+ return parsed.children
+
+
+class DriverStatusTableDirective(SphinxDirective):
+ has_content = True
+ required_arguments = 0
+ optional_arguments = 0
+ option_spec: OptionSpec = {}
+
+ def run(self):
+ table = []
+ for line in self.content:
+ if "=>" in line:
+ xref, _, path = line.partition("=>")
+ xref = xref.strip()
+ path = path.strip()
+ else:
+ xref = None
+ path = line.strip()
+
+ if "[#" in path:
+ footnote = path[path.index("[#") + 2 : -1].strip()
+ path = path[: path.index("[#")].strip()
+ else:
+ footnote = None
+
+ rel_filename, filename = self.env.relfn2path(path)
+ self.env.note_dependency(rel_filename)
+
+ path = Path(filename).resolve()
+ status = driver_status(path)
+ table.append((status, xref, footnote))
+
+ table.sort(key=lambda x: (x[0].vendor, x[0].implementation))
+
+ generated_lines = [
+ ".. list-table::",
+ " :header-rows: 1",
+ "",
+ " * - Vendor",
+ " - Implementation",
+ " - :ref:`driver-status`",
+ " - Packages [#packages]_",
+ "",
+ ]
+ for row in table:
+ if row[1]:
+ generated_lines.append(f" * - :doc:`{row[0].vendor} <{row[1]}>`")
+ else:
+ generated_lines.append(f" * - {row[0].vendor}")
+
+ if row[2]:
+ generated_lines[-1] += f" [#{row[2]}]_"
+
+ generated_lines.append(f" - {row[0].implementation}")
+ generated_lines.append(f" - {row[0].status}")
+
+ generated_lines.append(" -")
+ packages = itertools.groupby(
+ sorted(row[0].packages, key=lambda x: x[0].lower()),
+ key=lambda x: x[0],
+ )
+ for repo, group in packages:
+ group = list(group)
+ if generated_lines[-1][-1] == "-":
+ generated_lines[-1] += " "
+ else:
+ generated_lines[-1] += ", "
+
+ if len(group) == 1:
+ generated_lines[-1] += f"`{repo} <{group[0][2]}>`__"
+ else:
+ links = ", ".join(f"`{pkg[1]} <{pkg[2]}>`__" for pkg in group)
+ generated_lines[-1] += f"{repo} ({links})"
+ generated_lines.append("")
+
+ generated_lines.extend(
+ [
+ "",
+ ".. [#packages] This lists only packages available in package repositories. However, as noted above, many of these drivers can be used from languages not listed via the driver manager, even if a package is not yet available.", # noqa:E501
+ ]
+ )
+
+ parsed = docutils.nodes.Element()
+ nested_parse_with_titles(
+ self.state,
+ StringList(generated_lines, source=""),
+ parsed,
+ )
+ return parsed.children
+
+
+def setup(app) -> None:
+ app.add_directive("adbc_driver_installation", DriverInstallationDirective)
+ app.add_directive("adbc_driver_status", DriverStatusDirective)
+ app.add_directive("adbc_driver_status_table", DriverStatusTableDirective)
+
+ return {
+ "version": "0.1",
+ "parallel_read_safe": True,
+ "parallel_write_safe": True,
+ }
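As a rough standalone sketch (not part of the commit), the badge parsing in `_driver_status` above boils down to splitting a README line around `img.shields.io`: the badge alt text carries a `Key: value` tag, and the final parentheses carry the link target. The badge text and URLs here are illustrative:

```python
# Sketch of the badge-line parsing used by _driver_status in adbc_misc.py:
# a README badge such as
#   [![Vendor: Snowflake](https://img.shields.io/badge/...)](https://example.com)
# is split around "img.shields.io"; the alt text yields a "Key: value"
# tag, and the trailing parentheses yield the link URL.
def parse_badge_line(line: str) -> tuple:
    before, _, after = line.partition("img.shields.io")
    # alt text sits between "![" and the first "]"
    tag = before[before.index("![") + 2 : before.index("]")].strip()
    key, _, value = tag.partition(": ")
    # the link target is the last parenthesized span on the line
    url = after[after.rfind("(") + 1 : after.rfind(")")].strip()
    return key.strip(), value.strip(), url


line = "[![Vendor: Snowflake](https://img.shields.io/badge/x-blue)](https://example.com)"
print(parse_badge_line(line))  # ('Vendor', 'Snowflake', 'https://example.com')
```

Keys like `Vendor`, `Implementation`, and `Status` feed the driver metadata, while any other key is treated as a package repository name.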
diff --git a/c/driver/snowflake/README.md b/java/driver/flight-sql/README.md
similarity index 55%
copy from c/driver/snowflake/README.md
copy to java/driver/flight-sql/README.md
index 75c67d3ee..f8efbf865 100644
--- a/c/driver/snowflake/README.md
+++ b/java/driver/flight-sql/README.md
@@ -17,15 +17,17 @@
under the License.
-->
-# ADBC Snowflake Driver
+# ADBC Arrow Flight SQL Driver
-This driver provides an interface to
-[Snowflake](https://www.snowflake.com/) using ADBC.
+
+
+
-## Building
+[](https://mvnrepository.com/artifact/org.apache.arrow.adbc/adbc-driver-flight-sql)
-See [CONTRIBUTING.md](../../../CONTRIBUTING.md) for details.
+This driver provides an interface to databases supporting
+[Apache Arrow Flight SQL](https://arrow.apache.org/docs/format/FlightSql.html) using ADBC.
-## Testing
+## Building
-Snowflake credentials are required.
+See [CONTRIBUTING.md](../../../CONTRIBUTING.md) for details.
diff --git a/c/driver/snowflake/README.md b/java/driver/jdbc/README.md
similarity index 58%
copy from c/driver/snowflake/README.md
copy to java/driver/jdbc/README.md
index 75c67d3ee..e59c5b611 100644
--- a/c/driver/snowflake/README.md
+++ b/java/driver/jdbc/README.md
@@ -17,15 +17,17 @@
under the License.
-->
-# ADBC Snowflake Driver
+# ADBC JDBC Adapter
-This driver provides an interface to
-[Snowflake](https://www.snowflake.com/) using ADBC.
+
+
+
-## Building
+[](https://mvnrepository.com/artifact/org.apache.arrow.adbc/adbc-driver-jdbc)
-See [CONTRIBUTING.md](../../../CONTRIBUTING.md) for details.
+This driver adapts existing JDBC drivers to ADBC, providing an ADBC interface to any database with a JDBC driver.
-## Testing
+## Building
-Snowflake credentials are required.
+See [CONTRIBUTING.md](../../../CONTRIBUTING.md) for details.
diff --git a/python/adbc_driver_bigquery/adbc_driver_bigquery/__init__.py b/python/adbc_driver_bigquery/adbc_driver_bigquery/__init__.py
index ce5351ea8..ce413e3df 100644
--- a/python/adbc_driver_bigquery/adbc_driver_bigquery/__init__.py
+++ b/python/adbc_driver_bigquery/adbc_driver_bigquery/__init__.py
@@ -101,10 +101,10 @@ class StatementOptions(enum.Enum):
#: The following values are supported:
#:
#: * ``CREATE_IF_NEEDED``:
- #: Will create the table if it does not already exist.
- #: Tables are created atomically on successful completion of a job.
+ #: Will create the table if it does not already exist.
+ #: Tables are created atomically on successful completion of a job.
#: * ``CREATE_NEVER``:
- #: Ensures the table must already exist and will not be automatically created.
+ #: Ensures the table must already exist and will not be automatically created.
CREATE_DISPOSITION = "adbc.bigquery.sql.query.create_disposition"
#: WRITE_DISPOSITION specifies how existing data in the destination
@@ -114,13 +114,13 @@ class StatementOptions(enum.Enum):
#: The following values are supported:
#:
#: * ``WRITE_APPEND``:
- #: Will append to any existing data in the destination table.
- #: Data is appended atomically on successful completion of a job.
+ #: Will append to any existing data in the destination table.
+ #: Data is appended atomically on successful completion of a job.
#: * ``WRITE_TRUNCATE``:
- #: Overrides the existing data in the destination table.
- #: Data is overwritten atomically on successful completion of a job.
+ #: Overrides the existing data in the destination table.
+ #: Data is overwritten atomically on successful completion of a job.
#: * ``WRITE_EMPTY``:
- #: Fails writes if the destination table already contains data.
+ #: Fails writes if the destination table already contains data.
WRITE_DISPOSITION = "adbc.bigquery.sql.query.write_disposition"
#: DISABLE_QUERY_CACHE prevents results being fetched from the query cache.
@@ -162,21 +162,21 @@ class StatementOptions(enum.Enum):
#: The following values are supported:
#:
#: * ``BATCH``:
- #: BatchPriority specifies that the query should be scheduled with the
- #: batch priority. BigQuery queues each batch query on your behalf, and
- #: starts the query as soon as idle resources are available, usually
- #: within a few minutes. If BigQuery hasn't started the query within 24
- #: hours, BigQuery changes the job priority to interactive. Batch queries
- #: don't count towards your concurrent rate limit, which can make it
- #: easier to start many queries at once. More information can be found at:
- #: https://cloud.google.com/bigquery/docs/running-queries#batchqueries
+ #: BatchPriority specifies that the query should be scheduled with the
+ #: batch priority. BigQuery queues each batch query on your behalf, and
+ #: starts the query as soon as idle resources are available, usually
+ #: within a few minutes. If BigQuery hasn't started the query within 24
+ #: hours, BigQuery changes the job priority to interactive. Batch queries
+ #: don't count towards your concurrent rate limit, which can make it
+ #: easier to start many queries at once. More information can be found at:
+ #: https://cloud.google.com/bigquery/docs/running-queries#batchqueries
#: * ``INTERACTIVE``:
- #: Specifies that the query should be scheduled with interactive priority,
- #: which means that the query is executed as soon as possible. Interactive
- #: queries count towards your concurrent rate limit and your daily limit.
- #: It is the default priority with which queries get executed. More
- #: information can be found at:
- #: https://cloud.google.com/bigquery/docs/running-queries#queries
+ #: Specifies that the query should be scheduled with interactive priority,
+ #: which means that the query is executed as soon as possible. Interactive
+ #: queries count towards your concurrent rate limit and your daily limit.
+ #: It is the default priority with which queries get executed. More
+ #: information can be found at:
+ #: https://cloud.google.com/bigquery/docs/running-queries#queries
PRIORITY = "adbc.bigquery.sql.query.priority"
#: MAX_BILLING_TIER sets the maximum billing tier for a Query.
diff --git a/ruby/README.md b/ruby/README.md
index 5a99d3c57..4a61734e1 100644
--- a/ruby/README.md
+++ b/ruby/README.md
@@ -19,6 +19,8 @@
# Red ADBC
+[](https://rubygems.org/gems/red-adbc)
+
Red ADBC is the Ruby bindings of ADBC GLib.
## How to install
diff --git a/rust/driver/datafusion/README.md b/rust/driver/datafusion/README.md
index 63bff721e..ed5bb9d4d 100644
--- a/rust/driver/datafusion/README.md
+++ b/rust/driver/datafusion/README.md
@@ -19,6 +19,12 @@
# ADBC Driver for Apache DataFusion
+
+
+
+
+[](https://crates.io/crates/adbc_datafusion)
+
## Example Usage
```