This is an automated email from the ASF dual-hosted git repository.

jshao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/gravitino.git


The following commit(s) were added to refs/heads/main by this push:
     new 7f4dec3b0e [MINOR] improvement(docs): Add Engine Connectivity section with Iceberg REST engine pages (#10778)
7f4dec3b0e is described below

commit 7f4dec3b0e26ccd8678d8148d9bf39610ecfb10a
Author: Mark Hoerth <[email protected]>
AuthorDate: Wed Apr 15 01:02:46 2026 -0700

    [MINOR] improvement(docs): Add Engine Connectivity section with Iceberg REST engine pages (#10778)
    
    ### What changes were proposed in this pull request?
    
    - Adds new `iceberg-rest-engine/` directory with validated connection
    pages for Trino, Spark, Flink, Doris, StarRocks, Ray, and PyIceberg
    - Extracts engine connectivity content from `iceberg-rest-service.md`
    into dedicated per-engine pages
    - Content was previously buried and undiscoverable in the monolithic
    iceberg-rest-service page
    
    ### Why are the changes needed?
    
    Users connecting Trino, Spark, Flink, and other engines to the Gravitino
    Iceberg REST catalog had no dedicated, findable documentation. This
    restructures the content into focused per-engine pages.
    
    ### Does this PR introduce any user-facing change?
    
    Yes — new documentation pages added under `docs/iceberg-rest-engine/`.
    
    ### How was this patch tested?
    
    All Trino, Spark, and Flink IRC configurations validated against
    gravitino-irc-quickstart.
---
 docs/iceberg-rest-engine/doris.md     |  58 +++++++++
 docs/iceberg-rest-engine/flink.md     | 171 +++++++++++++++++++++++++
 docs/iceberg-rest-engine/pyiceberg.md | 112 +++++++++++++++++
 docs/iceberg-rest-engine/ray.md       | 100 +++++++++++++++
 docs/iceberg-rest-engine/spark.md     | 227 ++++++++++++++++++++++++++++++++++
 docs/iceberg-rest-engine/starrocks.md |  66 ++++++++++
 docs/iceberg-rest-engine/trino.md     | 175 ++++++++++++++++++++++++++
 docs/iceberg-rest-service.md          | 226 ---------------------------------
 8 files changed, 909 insertions(+), 226 deletions(-)

diff --git a/docs/iceberg-rest-engine/doris.md b/docs/iceberg-rest-engine/doris.md
new file mode 100755
index 0000000000..e662077e81
--- /dev/null
+++ b/docs/iceberg-rest-engine/doris.md
@@ -0,0 +1,58 @@
+---
+title: Connect Doris via Iceberg REST
+sidebar_label: Doris
+---
+
+# Connecting Apache Doris via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible engine can connect to directly. This page describes how to configure Apache Doris
+to use Gravitino's Iceberg REST (IRC) endpoint.
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from your Doris environment. The default port is `9001`.
+
+## Configuration
+
+Create an Iceberg catalog in Doris pointing at the Gravitino IRC endpoint:
+
+```sql
+CREATE CATALOG iceberg PROPERTIES (
+    "uri"                  = "http://<gravitino-host>:9001/iceberg/",
+    "type"                 = "iceberg",
+    "iceberg.catalog.type" = "rest",
+    "s3.endpoint"          = "http://s3.<region>.amazonaws.com",
+    "s3.region"            = "<region>",
+    "s3.access_key"        = "<access-key>",
+    "s3.secret_key"        = "<secret-key>"
+);
+```
+
+## Usage examples
+
+```sql
+SWITCH iceberg;
+CREATE DATABASE db;
+USE db;
+CREATE TABLE t(a int);
+INSERT INTO t VALUES (1);
+SELECT * FROM t;
+```
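+
+Doris caches external catalog metadata. If tables created or changed by another engine are not yet
+visible, refresh the catalog (a standard Doris statement; verify the syntax against your Doris
+version):
+
+```sql
+REFRESH CATALOG iceberg;
+```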
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect Spark via Iceberg REST](./spark.md)
+- [Connect Trino via Iceberg REST](./trino.md)
diff --git a/docs/iceberg-rest-engine/flink.md b/docs/iceberg-rest-engine/flink.md
new file mode 100755
index 0000000000..7efc54809e
--- /dev/null
+++ b/docs/iceberg-rest-engine/flink.md
@@ -0,0 +1,171 @@
+---
+title: Connect Flink via Iceberg REST
+sidebar_label: Flink
+---
+
+# Connecting Apache Flink via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible engine can connect to directly — without installing a Gravitino-specific
+connector plugin. This page describes how to configure Apache Flink to use Gravitino's Iceberg
+REST (IRC) endpoint.
+
+:::note
+This integration uses the standard Apache Iceberg REST catalog specification. Gravitino enforces
+its full access-control model on all IRC requests.
+:::
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from your Flink environment. The default port is `9001`.
+- The following JAR files on your Flink classpath:
+  - `iceberg-flink-runtime-1.18-1.7.1.jar` (or `iceberg-flink-runtime-1.19-1.7.1.jar` for Flink 1.19)
+  - `iceberg-aws-bundle-1.7.1.jar`
+  - `flink-shaded-hadoop-2-uber.jar`
+
+This page uses **Flink 1.18** with **Iceberg 1.7.1**.
+
+:::note
+Unlike Spark and Trino, Flink requires S3 connection properties to be specified in the catalog
+definition itself rather than in a separate configuration file.
+:::
+
+## Configuration
+
+Flink uses a cluster configuration file (`flink-conf.yaml`) for general settings. The Iceberg
+catalog is registered per-session using a `CREATE CATALOG` SQL statement.
+
+### flink-conf.yaml
+
+Set batch execution mode and result display in `$FLINK_HOME/conf/flink-conf.yaml`:
+
+```yaml
+execution.runtime-mode: batch
+sql-client.execution.result-mode: tableau
+```
+
+:::tip
+`tableau` mode prints query results inline in the terminal. Without it, Flink SQL Client opens
+results in a full-screen pager.
+:::
+
+### Starting the Flink SQL Client
+
+```bash
+$FLINK_HOME/bin/sql-client.sh
+```
+
+## Registering the catalog
+
+At the Flink SQL Client prompt, run the following `CREATE CATALOG` statement. Replace
+`<gravitino-host>` with your Gravitino server address and supply your S3 credentials.
+
+### Without authentication
+
+```sql
+CREATE CATALOG gravitino_irc WITH (
+  'type'                 = 'iceberg',
+  'catalog-type'         = 'rest',
+  'uri'                  = 'http://<gravitino-host>:9001/iceberg',
+  'io-impl'              = 'org.apache.iceberg.aws.s3.S3FileIO',
+  's3.region'            = 'us-east-1',
+  's3.access-key-id'     = '<access-key>',
+  's3.secret-access-key' = '<secret-key>'
+);
+```
+
+### With OAuth2 authentication
+
+```sql
+CREATE CATALOG gravitino_irc WITH (
+  'type'                 = 'iceberg',
+  'catalog-type'         = 'rest',
+  'uri'                  = 'http://<gravitino-host>:9001/iceberg',
+  'rest.auth.type'       = 'oauth2',
+  'rest.auth.oauth2.token' = '<your-token>',
+  'io-impl'              = 'org.apache.iceberg.aws.s3.S3FileIO',
+  's3.region'            = 'us-east-1',
+  's3.access-key-id'     = '<access-key>',
+  's3.secret-access-key' = '<secret-key>'
+);
+```
+
+See [How to authenticate](../security/how-to-authenticate.md) for Gravitino authentication
+configuration options.
+
+:::note
+The catalog registration persists for the duration of the SQL Client session. You must re-run
+`CREATE CATALOG` each time you start a new session.
+:::
+
+:::tip Local development
+For local development with MinIO, add the following S3 properties to the catalog definition:
+
+```sql
+  's3.endpoint'          = 'http://<minio-host>:9000',
+  's3.path-style-access' = 'true',
+```
+
+See [gravitino-irc-quickstart](https://github.com/markhoerth/gravitino-irc-quickstart) for a
+complete local development environment using MinIO.
+:::
+
+## Usage examples
+
+### Use the catalog
+
+```sql
+USE CATALOG gravitino_irc;
+```
+
+### List databases
+
+```sql
+SHOW DATABASES;
+```
+
+### List tables
+
+```sql
+SHOW TABLES IN <namespace>;
+```
+
+### Query a table
+
+```sql
+SELECT * FROM <namespace>.<table>;
+```
+
+### Create a table
+
+```sql
+CREATE TABLE gravitino_irc.<namespace>.new_table (
+  id INT,
+  name STRING,
+  created_at TIMESTAMP
+);
+```
+
+### Insert data
+
+```sql
+INSERT INTO gravitino_irc.<namespace>.new_table VALUES (1, 'example', CURRENT_TIMESTAMP);
+```
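+
+### Query a snapshot (time travel)
+
+For batch reads, the Iceberg Flink runtime also accepts SQL hints to read a specific snapshot;
+`'snapshot-id'` is an option documented by Iceberg's Flink integration, and `<snapshot-id>` below
+is a placeholder:
+
+```sql
+SELECT * FROM <namespace>.<table> /*+ OPTIONS('snapshot-id'='<snapshot-id>') */;
+```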
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect Spark via Iceberg REST](./spark.md)
+- [Connect Trino via Iceberg REST](./trino.md)
+- [Flink Gravitino connector](../flink-connector/flink-connector.md)
diff --git a/docs/iceberg-rest-engine/pyiceberg.md b/docs/iceberg-rest-engine/pyiceberg.md
new file mode 100755
index 0000000000..99e2fddb44
--- /dev/null
+++ b/docs/iceberg-rest-engine/pyiceberg.md
@@ -0,0 +1,112 @@
+---
+title: Connect PyIceberg via Iceberg REST
+sidebar_label: PyIceberg
+---
+
+# Connecting PyIceberg via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible client can connect to directly. This page describes how to use PyIceberg with
+Gravitino's Iceberg REST (IRC) endpoint.
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from your Python environment. The default port is `9001`.
+- PyIceberg installed: `pip install pyiceberg`
+
+## Configuration
+
+```python
+from pyiceberg.catalog import load_catalog
+
+catalog = load_catalog(
+    "gravitino_irc",
+    **{
+        "type": "rest",
+        "uri":  "http://<gravitino-host>:9001/iceberg",
+    }
+)
+```
+
+### With credential vending
+
+```python
+catalog = load_catalog(
+    "gravitino_irc",
+    **{
+        "type":                            "rest",
+        "uri":                             "http://<gravitino-host>:9001/iceberg",
+        "header.X-Iceberg-Access-Delegation": "vended-credentials",
+    }
+)
+```
+
+### With OAuth2 authentication
+
+```python
+catalog = load_catalog(
+    "gravitino_irc",
+    **{
+        "type":  "rest",
+        "uri":   "http://<gravitino-host>:9001/iceberg",
+        "token": "<your-token>",
+    }
+)
+```
+
+See [How to authenticate](../security/how-to-authenticate.md) for Gravitino authentication
+configuration options.
+
+## Usage examples
+
+### List namespaces
+
+```python
+catalog.list_namespaces()
+```
+
+### Load a table
+
+```python
+table = catalog.load_table("db.table")
+print(table.schema())
+```
+
+### Scan a table
+
+```python
+df = table.scan().to_arrow()
+print(df)
+```
+
+### Create a namespace and table
+
+```python
+catalog.create_namespace("db")
+
+from pyiceberg.schema import Schema
+from pyiceberg.types import NestedField, LongType, StringType
+
+schema = Schema(
+    NestedField(1, "id",   LongType(),   required=True),
+    NestedField(2, "name", StringType(), required=False),
+)
+catalog.create_table("db.new_table", schema=schema)
+```
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect Spark via Iceberg REST](./spark.md)
+- [Connect Flink via Iceberg REST](./flink.md)
diff --git a/docs/iceberg-rest-engine/ray.md b/docs/iceberg-rest-engine/ray.md
new file mode 100644
index 0000000000..c75a315b28
--- /dev/null
+++ b/docs/iceberg-rest-engine/ray.md
@@ -0,0 +1,100 @@
+---
+title: Connect Ray via Iceberg REST
+sidebar_label: Ray
+---
+
+# Connecting Ray via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible client can connect to directly. This page describes how to use Ray Data with
+Gravitino's Iceberg REST (IRC) endpoint.
+
+:::note
+Ray Data only supports reading from and writing to existing Iceberg tables. It does not support
+DDL operations such as creating, dropping, or altering tables, schemas, or catalogs. Use Spark
+or PyIceberg to manage table metadata.
+:::
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from your Python environment. The default port is `9001`.
+- Ray installed: `pip install ray[data]`
+
+## Configuration
+
+Ray Data connects to the Gravitino IRC endpoint via `catalog_kwargs` passed directly to the
+read/write functions. No separate catalog registration is required.
+
+### Without authentication
+
+```python
+catalog_kwargs = {
+    "name": "default",
+    "type": "rest",
+    "uri": "http://<gravitino-host>:9001/iceberg/",
+}
+```
+
+### With credential vending and Basic authentication
+
+```python
+catalog_kwargs = {
+    "name": "default",
+    "type": "rest",
+    "uri": "http://<gravitino-host>:9001/iceberg/",
+    "header.X-Iceberg-Access-Delegation": "vended-credentials",
+    "auth": {
+        "type": "basic",
+        "basic": {"username": "<user>", "password": "<password>"}
+    }
+}
+```
+
+See [How to authenticate](../security/how-to-authenticate.md) for Gravitino authentication
+configuration options.
+
+## Usage examples
+
+### Write to an Iceberg table
+
+```python
+import ray
+import pandas as pd
+
+docs = [{"id": i, "data": f"Doc {i}"} for i in range(4)]
+ds = ray.data.from_pandas(pd.DataFrame(docs))
+ds.write_iceberg(
+    table_identifier="default.sample",
+    catalog_kwargs=catalog_kwargs
+)
+```
+
+### Read from an Iceberg table
+
+```python
+import ray
+
+ds = ray.data.read_iceberg(
+    table_identifier="default.sample",
+    catalog_kwargs=catalog_kwargs
+)
+ds.show(limit=1)
+```
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect PyIceberg via Iceberg REST](./pyiceberg.md)
+- [Connect Spark via Iceberg REST](./spark.md)
+
diff --git a/docs/iceberg-rest-engine/spark.md b/docs/iceberg-rest-engine/spark.md
new file mode 100755
index 0000000000..abdc8d7ada
--- /dev/null
+++ b/docs/iceberg-rest-engine/spark.md
@@ -0,0 +1,227 @@
+---
+title: Connect Spark via Iceberg REST
+sidebar_label: Spark
+---
+
+# Connecting Apache Spark via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible engine can connect to directly — without installing a Gravitino-specific
+connector plugin. This page describes how to configure Apache Spark to use Gravitino's Iceberg
+REST (IRC) endpoint.
+
+:::note
+This integration uses the standard Apache Iceberg REST catalog specification. Gravitino enforces
+its full access-control model on all IRC requests. Per-user identity propagation from the engine
+is planned for a future release; current requests are authorized using the credentials supplied
+in the Spark configuration.
+:::
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from your Spark environment. The default port is `9001`.
+- The following JAR files available in your Spark environment:
+  - `iceberg-spark-runtime-3.5_2.12-1.7.1.jar`
+  - `hadoop-aws-3.3.4.jar`
+  - `aws-bundle-2.29.38.jar`
+
+This page uses **Spark 3.5.3** with **Iceberg 1.7.1**. For other versions, ensure compatibility
+between Spark, Scala, and Iceberg runtime versions.
+
+## Configuration
+
+`spark-defaults.conf` is Spark's persistent configuration file. Properties set here are
+automatically applied to every Spark session — no command-line flags needed. The file lives at:
+
+```
+$SPARK_HOME/conf/spark-defaults.conf
+```
+
+If the file doesn't exist yet, copy the template:
+
+```bash
+cp $SPARK_HOME/conf/spark-defaults.conf.template $SPARK_HOME/conf/spark-defaults.conf
+```
+
+### Simple authentication
+
+Add the following to `$SPARK_HOME/conf/spark-defaults.conf`:
+
+```properties
+# Iceberg extensions
+spark.sql.extensions                                    org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
+
+# Gravitino IRC catalog
+spark.sql.catalog.gravitino_irc                         org.apache.iceberg.spark.SparkCatalog
+spark.sql.catalog.gravitino_irc.type                    rest
+spark.sql.catalog.gravitino_irc.uri                     http://<gravitino-host>:9001/iceberg
+
+# S3 FileIO
+spark.sql.catalog.gravitino_irc.io-impl                 org.apache.iceberg.aws.s3.S3FileIO
+spark.sql.catalog.gravitino_irc.s3.region               us-east-1
+spark.sql.catalog.gravitino_irc.s3.access-key-id        <access-key>
+spark.sql.catalog.gravitino_irc.s3.secret-access-key    <secret-key>
+
+# Hadoop S3A (for s3a:// paths)
+spark.hadoop.fs.s3a.impl                                org.apache.hadoop.fs.s3a.S3AFileSystem
+
+# Set as default catalog (optional)
+spark.sql.defaultCatalog                                gravitino_irc
+```
+
+:::note
+`gravitino_irc` is the catalog identifier used within Spark. It maps to the Gravitino IRC endpoint
+via the `uri` property. You may use any identifier you prefer. S3 credentials can alternatively
+be supplied via environment variables (`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`) or an IAM
+instance profile, in which case the explicit credential lines can be omitted.
+:::
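As a sketch of the environment-variable alternative mentioned above (the variable names are the standard AWS SDK ones; the values shown are placeholders):

```shell
# Export the standard AWS SDK environment variables before launching Spark;
# the AWS SDK default credentials chain picks them up, so the explicit
# credential lines in spark-defaults.conf can be omitted.
export AWS_ACCESS_KEY_ID="<access-key>"
export AWS_SECRET_ACCESS_KEY="<secret-key>"

# Then start Spark as usual, e.g.: $SPARK_HOME/bin/spark-sql
```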
+
+### With OAuth2 authentication
+
+If Gravitino is configured with OAuth2, add the auth properties to the same
+`$SPARK_HOME/conf/spark-defaults.conf` file:
+
+```properties
+# Iceberg extensions
+spark.sql.extensions                                    org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
+
+# Gravitino IRC catalog
+spark.sql.catalog.gravitino_irc                         org.apache.iceberg.spark.SparkCatalog
+spark.sql.catalog.gravitino_irc.type                    rest
+spark.sql.catalog.gravitino_irc.uri                     http://<gravitino-host>:9001/iceberg
+
+# OAuth2 authentication
+spark.sql.catalog.gravitino_irc.rest.auth.type          oauth2
+spark.sql.catalog.gravitino_irc.rest.auth.oauth2.token  <your-token>
+
+# S3 FileIO
+spark.sql.catalog.gravitino_irc.io-impl                 org.apache.iceberg.aws.s3.S3FileIO
+spark.sql.catalog.gravitino_irc.s3.region               us-east-1
+spark.sql.catalog.gravitino_irc.s3.access-key-id        <access-key>
+spark.sql.catalog.gravitino_irc.s3.secret-access-key    <secret-key>
+
+# Hadoop S3A (for s3a:// paths)
+spark.hadoop.fs.s3a.impl                                org.apache.hadoop.fs.s3a.S3AFileSystem
+
+# Set as default catalog (optional)
+spark.sql.defaultCatalog                                gravitino_irc
+```
+
+See [How to authenticate](../security/how-to-authenticate.md) for Gravitino authentication
+configuration options.
+
+:::tip Local development
+For local development, [MinIO](https://min.io) can be used as an S3-compatible storage backend.
+Replace the S3 FileIO section with:
+
+```properties
+spark.sql.catalog.gravitino_irc.io-impl                 org.apache.iceberg.aws.s3.S3FileIO
+spark.sql.catalog.gravitino_irc.s3.endpoint             http://<minio-host>:9000
+spark.sql.catalog.gravitino_irc.s3.path-style-access    true
+spark.sql.catalog.gravitino_irc.s3.access-key-id        <minio-access-key>
+spark.sql.catalog.gravitino_irc.s3.secret-access-key    <minio-secret-key>
+spark.hadoop.fs.s3a.impl                                org.apache.hadoop.fs.s3a.S3AFileSystem
+spark.hadoop.fs.s3a.endpoint                            http://<minio-host>:9000
+spark.hadoop.fs.s3a.path.style.access                   true
+spark.hadoop.fs.s3a.connection.ssl.enabled              false
+```
+
+See [gravitino-irc-quickstart](https://github.com/markhoerth/gravitino-irc-quickstart) for a
+complete local development environment using MinIO.
+:::
+
+### With credential vending
+
+If Gravitino is configured with credential vending, add the following to enable it on the client side:
+
+```properties
+spark.sql.catalog.gravitino_irc.header.X-Iceberg-Access-Delegation    vended-credentials
+```
+
+See [Credential vending](../iceberg-rest-service.md#credential-vending) for server-side configuration.
+
+:::note
+For storage not managed by Gravitino, properties are not automatically transferred from the server
+to the client. Pass custom properties to initialize FileIO explicitly:
+
+```properties
+spark.sql.catalog.gravitino_irc.<configuration-key>    <property-value>
+```
+:::
+
+## Starting Spark
+
+Once `spark-defaults.conf` is in place, start your Spark session normally. The Gravitino IRC
+catalog is available immediately without any additional flags.
+
+### spark-shell (Scala)
+
+```bash
+$SPARK_HOME/bin/spark-shell
+```
+
+### spark-sql
+
+```bash
+$SPARK_HOME/bin/spark-sql
+```
+
+### pyspark
+
+```bash
+$SPARK_HOME/bin/pyspark
+```
+
+## Usage examples
+
+### List namespaces
+
+```sql
+SHOW NAMESPACES IN gravitino_irc;
+```
+
+### List tables
+
+```sql
+SHOW TABLES IN gravitino_irc.<namespace>;
+```
+
+### Query a table
+
+```sql
+SELECT * FROM gravitino_irc.<namespace>.<table> LIMIT 10;
+```
+
+### Create a table
+
+```sql
+CREATE TABLE gravitino_irc.<namespace>.new_table (
+  id INT,
+  name STRING,
+  created_at TIMESTAMP
+) USING iceberg;
+```
+
+### Insert data
+
+```sql
+INSERT INTO gravitino_irc.<namespace>.new_table VALUES (1, 'example', current_timestamp());
+```
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect Trino via Iceberg REST](./trino.md)
+- [Connect Flink via Iceberg REST](./flink.md)
+- [Spark Gravitino connector](../spark-connector/spark-connector.md)
diff --git a/docs/iceberg-rest-engine/starrocks.md b/docs/iceberg-rest-engine/starrocks.md
new file mode 100755
index 0000000000..4eff1cfb97
--- /dev/null
+++ b/docs/iceberg-rest-engine/starrocks.md
@@ -0,0 +1,66 @@
+---
+title: Connect StarRocks via Iceberg REST
+sidebar_label: StarRocks
+---
+
+# Connecting StarRocks via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible engine can connect to directly. This page describes how to configure StarRocks
+to use Gravitino's Iceberg REST (IRC) endpoint.
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from your StarRocks environment. The default port is `9001`.
+
+## Configuration
+
+Create an external Iceberg catalog in StarRocks pointing at the Gravitino IRC endpoint:
+
+```sql
+CREATE EXTERNAL CATALOG iceberg
+COMMENT "Gravitino Iceberg REST catalog"
+PROPERTIES
+(
+  "type"                          = "iceberg",
+  "iceberg.catalog.type"          = "rest",
+  "iceberg.catalog.uri"           = "http://<gravitino-host>:9001/iceberg",
+  "aws.s3.access_key"             = "<access-key>",
+  "aws.s3.secret_key"             = "<secret-key>",
+  "aws.s3.endpoint"               = "http://<s3-host>:9000",
+  "aws.s3.enable_path_style_access" = "true",
+  "client.factory"                = "com.starrocks.connector.iceberg.IcebergAwsClientFactory"
+);
+```
+
+:::note
+`client.factory` must be set explicitly for StarRocks to correctly initialize the Iceberg AWS client.
+:::
+
+## Usage examples
+
+```sql
+SET CATALOG iceberg;
+CREATE DATABASE db;
+USE db;
+CREATE TABLE t(a int);
+INSERT INTO t VALUES (1);
+SELECT * FROM t;
+```
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect Spark via Iceberg REST](./spark.md)
+- [Connect Trino via Iceberg REST](./trino.md)
diff --git a/docs/iceberg-rest-engine/trino.md b/docs/iceberg-rest-engine/trino.md
new file mode 100755
index 0000000000..d03bd0b4e8
--- /dev/null
+++ b/docs/iceberg-rest-engine/trino.md
@@ -0,0 +1,175 @@
+---
+title: Connect Trino via Iceberg REST
+sidebar_label: Trino
+---
+
+# Connecting Trino via Iceberg REST
+
+Apache Gravitino exposes an [Iceberg REST catalog](../iceberg-rest-service.md) endpoint that any
+Iceberg-compatible engine can connect to directly — without installing a Gravitino-specific
+connector plugin. This page describes how to configure Trino to use Gravitino's Iceberg REST
+(IRC) endpoint.
+
+:::note
+This integration uses the standard Apache Iceberg REST catalog specification. Gravitino enforces
+its full access-control model on all IRC requests.
+:::
+
+## Prerequisites
+
+- Apache Gravitino running with the Iceberg REST service enabled. See
+  [Iceberg REST catalog service](../iceberg-rest-service.md) for setup instructions.
+- The Gravitino IRC endpoint is accessible from the Trino coordinator and all workers. The
+  default port is `9001`.
+- Trino 469 or later is recommended.
+
+## Configuration
+
+Create a catalog properties file in your Trino `etc/catalog/` directory. The filename determines
+the catalog name in Trino — `gravitino_irc.properties` creates a catalog named `gravitino_irc`.
+
+:::note
+The `warehouse` property is managed by the Gravitino IRC server and does not need to be set in
+the Trino catalog configuration.
+:::
+
+### Without authentication
+
+```properties
+connector.name=iceberg
+iceberg.catalog.type=rest
+iceberg.rest-catalog.uri=http://<gravitino-host>:9001/iceberg
+
+# Native S3 filesystem (Trino 430+)
+fs.native-s3.enabled=true
+s3.region=us-east-1
+s3.aws-access-key=<access-key>
+s3.aws-secret-key=<secret-key>
+
+# Table defaults
+iceberg.file-format=PARQUET
+iceberg.compression-codec=ZSTD
+```
+
+### With OAuth2 authentication
+
+```properties
+connector.name=iceberg
+iceberg.catalog.type=rest
+iceberg.rest-catalog.uri=http://<gravitino-host>:9001/iceberg
+
+# OAuth2 authentication
+iceberg.rest-catalog.security=OAUTH2
+iceberg.rest-catalog.oauth2.token=<your-token>
+
+# Native S3 filesystem (Trino 430+)
+fs.native-s3.enabled=true
+s3.region=us-east-1
+s3.aws-access-key=<access-key>
+s3.aws-secret-key=<secret-key>
+
+# Table defaults
+iceberg.file-format=PARQUET
+iceberg.compression-codec=ZSTD
+```
+
+See [How to authenticate](../security/how-to-authenticate.md) for Gravitino authentication
+configuration options.
+
+:::tip Local development
+For local development with MinIO, replace the S3 section with:
+
+```properties
+fs.native-s3.enabled=true
+s3.endpoint=http://<minio-host>:9000
+s3.path-style-access=true
+s3.aws-access-key=<minio-access-key>
+s3.aws-secret-key=<minio-secret-key>
+s3.region=us-east-1
+```
+
+See [gravitino-irc-quickstart](https://github.com/markhoerth/gravitino-irc-quickstart) for a
+complete local development environment using MinIO.
+:::
+
+## Starting Trino
+
+Trino is a server process — the catalog is picked up automatically when Trino starts. After
+placing `gravitino_irc.properties` in `etc/catalog/`, restart Trino:
+
+```bash
+$TRINO_HOME/bin/launcher restart
+```
+
+Once Trino is running, connect using the Trino CLI:
+
+```bash
+trino --server http://<trino-host>:8080 --catalog gravitino_irc
+```
+
+Or connect without specifying a default catalog and qualify queries fully:
+
+```bash
+trino --server http://<trino-host>:8080
+```
+
+## Usage examples
+
+Once connected, use the Trino CLI or any Trino-compatible client.
+
+### List schemas
+
+```sql
+SHOW SCHEMAS FROM gravitino_irc;
+```
+
+### List tables
+
+```sql
+SHOW TABLES FROM gravitino_irc.<namespace>;
+```
+
+### Query a table
+
+```sql
+SELECT * FROM gravitino_irc.<namespace>.<table> LIMIT 10;
+```
+
+### Create a schema
+
+When creating a schema in Trino, a storage location must be specified:
+
+```sql
+CREATE SCHEMA gravitino_irc.<namespace>
+WITH (location = 's3://<bucket>/<namespace>/');
+```
+
+### Create a table
+
+```sql
+CREATE TABLE gravitino_irc.<namespace>.new_table (
+  id INTEGER,
+  name VARCHAR,
+  created_at TIMESTAMP
+)
+WITH (
+  format         = 'PARQUET',
+  format_version = 2
+);
+```
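+
+### Insert data
+
+Standard Trino `INSERT` syntax works against the catalog. A minimal sketch (placeholders as above;
+the timestamp literal is illustrative):
+
+```sql
+INSERT INTO gravitino_irc.<namespace>.new_table
+VALUES (1, 'example', TIMESTAMP '2026-01-01 00:00:00');
+```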
+
+## Gravitino connector vs Iceberg REST
+
+| Feature                  | Gravitino Engine Connector | Iceberg REST                  |
+|:-------------------------|:---------------------------|:------------------------------|
+| Engine plugin required   | Yes                        | No                            |
+| Gravitino access control | Yes                        | Yes                           |
+| Supported engines        | Trino, Spark, Flink, Daft  | Any Iceberg-compatible engine |
+| Credential vending       | Varies                     | Yes (S3, GCS, OSS, ADLS)      |
+
+## Related
+
+- [Iceberg REST catalog service](../iceberg-rest-service.md)
+- [Connect Spark via Iceberg REST](./spark.md)
+- [Connect Flink via Iceberg REST](./flink.md)
+- [Trino Gravitino connector](../trino-connector/trino-connector.md)
diff --git a/docs/iceberg-rest-service.md b/docs/iceberg-rest-service.md
index 59ac6529c4..1e437425f3 100644
--- a/docs/iceberg-rest-service.md
+++ b/docs/iceberg-rest-service.md
@@ -574,232 +574,6 @@ curl  http://127.0.0.1:9001/iceberg/v1/config
 
 Normally you will see output like `{"defaults":{},"overrides":{}, "endpoints":["GET /v1/{prefix}/namespaces", ...]}`.
 
-## Exploring the Apache Gravitino Iceberg REST catalog service with Apache Spark
-
-### Deploying Apache Spark with Apache Iceberg support
-
-Follow the [Spark Iceberg start guide](https://iceberg.apache.org/docs/1.10.0/spark-getting-started/) to set up the Apache Spark and Apache Iceberg environment.
-
-### Starting the Apache Spark client with the Apache Iceberg REST catalog
-
-| Configuration item                       | Description                                                               |
-|------------------------------------------|---------------------------------------------------------------------------|
-| `spark.sql.catalog.${catalog-name}.type` | The Spark catalog type; should be set to `rest`.                          |
-| `spark.sql.catalog.${catalog-name}.uri`  | Spark Iceberg REST catalog URI, such as `http://127.0.0.1:9001/iceberg/`. |
-
-For example, we can configure Spark catalog options to use the Gravitino Iceberg REST catalog with the catalog name `rest`.
-
-```shell
-./bin/spark-sql -v \
---packages org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.3.1 \
---conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
---conf spark.sql.catalog.rest=org.apache.iceberg.spark.SparkCatalog  \
---conf spark.sql.catalog.rest.type=rest  \
---conf spark.sql.catalog.rest.uri=http://127.0.0.1:9001/iceberg/
-```
-
-You may need to adjust the Iceberg Spark runtime jar file name according to the actual version number in your environment. If you want to access data stored in the cloud, you need to download the corresponding jars (please refer to the cloud storage part) and place them in the Spark classpath. If you want to enable credential vending, set `credential-providers` to a proper value on the server side and set `spark.sql.catalog.rest.header.X-Iceberg-Access-Delegation` = `vended-credentials` i [...]
-
-For other storages not managed by Gravitino, the properties aren't transferred from the server to the client automatically. If you want to pass custom properties to initialize `FileIO`, you can add them via `spark.sql.catalog.${iceberg_catalog_name}.${configuration_key}` = `{property_value}`.
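For instance (a sketch, assuming a local MinIO endpoint; the `s3.*` keys are Iceberg `S3FileIO` properties), the earlier `spark-sql` command could be extended with:

```shell
--conf spark.sql.catalog.rest.s3.endpoint=http://127.0.0.1:9000 \
--conf spark.sql.catalog.rest.s3.path-style-access=true \
--conf spark.sql.catalog.rest.s3.access-key-id=<access-key> \
--conf spark.sql.catalog.rest.s3.secret-access-key=<secret-key>
```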
-
-### Exploring Apache Iceberg with Apache Spark SQL
-
-```sql
-// First change to use the `rest` catalog
-USE rest;
-CREATE DATABASE IF NOT EXISTS dml;
-CREATE TABLE dml.test (id bigint COMMENT 'unique id') using iceberg;
-DESCRIBE TABLE EXTENDED dml.test;
-INSERT INTO dml.test VALUES (1), (2);
-SELECT * FROM dml.test;
-```
-
-## Apache Flink Integration
-
-You can use Apache Flink to connect to the Gravitino Iceberg REST catalog service. Below is an example of how to create a catalog and access tables using Flink SQL:
-
-```
-CREATE CATALOG my_catalog WITH (
-  'type' = 'iceberg',
-  'catalog-type' = 'rest',
-  'uri' = 'http://localhost:9001/iceberg/',
-  'header.X-Iceberg-Access-Delegation' = 'vended-credentials',
-  'rest.auth.type' = 'basic',
-  'rest.auth.basic.username' = 'manager',
-  'rest.auth.basic.password' = 'mock'
-);
-```
-
-After creating the catalog, you can use standard Flink SQL commands to explore and manage your Iceberg tables:
-
-```
-USE CATALOG my_catalog;
-SHOW DATABASES;
-USE default;
-SHOW TABLES;
-CREATE TABLE `my_catalog`.`default`.`sample`  (
-     id BIGINT COMMENT 'unique id',
-     data STRING
-);
-INSERT INTO `my_catalog`.`default`.`sample` VALUES (1, 'a');
-SELECT * FROM `my_catalog`.`default`.`sample`;
-```
-
-## Exploring the Apache Gravitino Iceberg REST catalog service with Trino
-
-### Deploying Trino with Apache Iceberg support
-
-To configure the Iceberg connector, create a catalog properties file like `etc/catalog/rest.properties` that references the Iceberg connector.
-
-```
-connector.name=iceberg
-iceberg.catalog.type=rest
-iceberg.rest-catalog.uri=http://localhost:9001/iceberg/
-fs.hadoop.enabled=true
-```
-
-Please refer to the [Trino Iceberg document](https://trino.io/docs/current/connector/iceberg.html) for more details.
-
-### Exploring Apache Iceberg with Trino SQL
-
-```sql
-USE rest.dml;
-DELETE FROM rest.dml.test WHERE id = 2;
-SELECT * FROM test;
-```
-
-## Exploring the Apache Gravitino Iceberg REST catalog service with Apache Doris
-
-### Creating Iceberg catalog in Apache Doris
-
-```
-CREATE CATALOG iceberg PROPERTIES (
-    "uri" = "http://localhost:9001/iceberg/",
-    "type" = "iceberg",
-    "iceberg.catalog.type" = "rest",
-    "s3.endpoint" = "http://s3.ap-southeast-2.amazonaws.com",
-    "s3.region" = "ap-southeast-2",
-    "s3.access_key" = "xxx",
-    "s3.secret_key" = "xxx"
-);
-```
-
-### Exploring Apache Iceberg with Apache Doris SQL
-
-```sql
-SWITCH iceberg;
-CREATE DATABASE db;
-USE db;
-CREATE TABLE t(a int);
-INSERT INTO t values(1);
-SELECT * FROM t;
-```
-
-## Exploring the Apache Gravitino Iceberg REST catalog service with StarRocks
-
-### Creating Iceberg catalog in StarRocks
-
-```
-CREATE EXTERNAL CATALOG 'iceberg'
-COMMENT "Gravitino Iceberg REST catalog on MinIO"
-PROPERTIES
-(
-  "type"="iceberg",
-  "iceberg.catalog.type"="rest",
-  "iceberg.catalog.uri"="http://iceberg-rest:9001/iceberg",
-  "aws.s3.access_key"="admin",
-  "aws.s3.secret_key"="password",
-  "aws.s3.endpoint"="http://minio:9000",
-  "aws.s3.enable_path_style_access"="true",
-  "client.factory"="com.starrocks.connector.iceberg.IcebergAwsClientFactory"
-);
-```
-
-Please note that you should set `client.factory` explicitly.
-
-### Exploring Apache Iceberg with StarRocks SQL
-
-```sql
-SET CATALOG iceberg;
-CREATE DATABASE db;
-USE db;
-CREATE TABLE t(a int);
-INSERT INTO t values(1);
-SELECT * FROM t;
-```
-
-### Exploring Apache Iceberg with PyIceberg
-
-```python
-from pyiceberg.catalog import load_catalog
-
-catalog = load_catalog(
-    "my_rest_catalog", 
-    **{
-        "type": "rest",
-        "uri": "http://localhost:9001/iceberg",
-        "header.X-Iceberg-Access-Delegation": "vended-credentials",
-        "auth": {"type": "noop"},
-    }
-)
-
-table_identifier = "db.table"
-table = catalog.load_table(table_identifier)
-print(table.scan().to_arrow())
-```
-
-### Exploring Apache Iceberg with Ray
-
-[Ray](https://www.ray.io/) is a unified framework for scaling AI and Python applications. Ray Data provides native support for reading and writing Iceberg tables through the REST catalog.
-
-:::note
-Ray Data only supports reading from and writing to existing Iceberg tables. It does not support DDL operations such as creating, dropping, or altering tables, schemas, or catalogs. You need to use other tools like Spark or PyIceberg to manage table metadata.
-:::
-
-#### Writing to Iceberg tables with Ray
-
-```python
-import ray
-import pandas as pd
-
-docs = [{"id": i, "data": f"Doc {i}"} for i in range(4)]
-ds = ray.data.from_pandas(pd.DataFrame(docs))
-ds.write_iceberg(
-    table_identifier="default.sample",
-    catalog_kwargs={
-        "name": "default",
-        "type": "rest",
-        "uri": "http://localhost:9001/iceberg/",
-        "header.X-Iceberg-Access-Delegation": "vended-credentials",
-        "auth": {
-            "type": "basic",
-            "basic": {"username": "manager", "password": "mock"}
-        }
-    }
-)
-```
-
-#### Reading Iceberg tables with Ray
-
-```python
-import ray
-
-ds = ray.data.read_iceberg(
-    table_identifier="default.sample",
-    catalog_kwargs={
-        "name": "default",
-        "type": "rest",
-        "uri": "http://localhost:9001/iceberg/",
-        "header.X-Iceberg-Access-Delegation": "vended-credentials",
-        "auth": {
-            "type": "basic",
-            "basic": {"username": "manager", "password": "mock"}
-        }
-    }
-)
-ds.show(limit=1)
-```
-
 ## Docker instructions
 
 You could run Gravitino Iceberg REST server through docker container:

