dramaticlly commented on code in PR #12115:
URL: https://github.com/apache/iceberg/pull/12115#discussion_r1931319129
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,86 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
```
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+### `rewrite-table-path`
+
+This procedure rewrites an Iceberg table's metadata files to a new location by replacing all source prefixes in absolute paths with a specified target prefix. After both metadata and data are copied to the desired location, the replicated Iceberg table will appear identical to the source table, including its snapshot history, schema, and partition specs.
+
+!!! info
+    This procedure is the starting point for fully or incrementally copying an Iceberg table to a new location. Copying the metadata and data files from the source to the target location is not performed by the procedure itself.
+
+| Argument Name      | Required? | Type   | Description                                                                                                   |
+|--------------------|-----------|--------|---------------------------------------------------------------------------------------------------------------|
+| `table`            | ✔️        | string | Name of the table                                                                                             |
+| `source_prefix`    | ✔️        | string | Source prefix to be replaced                                                                                  |
+| `target_prefix`    | ✔️        | string | Target prefix                                                                                                 |
+| `start_version`    |           | string | First metadata version to rewrite, identified by the name of a metadata.json file in the table's metadata log |
+| `end_version`      |           | string | Last metadata version to rewrite, identified by the name of a metadata.json file in the table's metadata log  |
+| `staging_location` |           | string | Custom staging location                                                                                       |
+
+#### Output
+
+| Output Name          | Type   | Description                                                                     |
+|----------------------|--------|---------------------------------------------------------------------------------|
+| `latest_version`     | string | Name of the latest metadata file version                                        |
+| `file_list_location` | string | Path to a file containing a listing of comma-separated paths ready to be copied |
+
+Example file list content:
+
+```csv
+sourcepath/datafile1.parquet,targetpath/datafile1.parquet
+sourcepath/datafile2.parquet,targetpath/datafile2.parquet
+stagingpath/manifest.avro,targetpath/manifest.avro
+```
+
+#### Examples
+
+Full rewrite of table `my_table`'s metadata paths from a source location in HDFS to a target location in an S3 bucket
Review Comment:
Added mode of operation as suggested!
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,86 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
```
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+### `rewrite-table-path`
+
+This procedure rewrites an Iceberg table's metadata files to a new location by replacing all source prefixes in absolute paths with a specified target prefix. After both metadata and data are copied to the desired location, the replicated Iceberg table will appear identical to the source table, including its snapshot history, schema, and partition specs.
+
+!!! info
+    This procedure is the starting point for fully or incrementally copying an Iceberg table to a new location. Copying the metadata and data files from the source to the target location is not performed by the procedure itself.
+
+| Argument Name      | Required? | Type   | Description                                                                                                   |
+|--------------------|-----------|--------|---------------------------------------------------------------------------------------------------------------|
+| `table`            | ✔️        | string | Name of the table                                                                                             |
+| `source_prefix`    | ✔️        | string | Source prefix to be replaced                                                                                  |
+| `target_prefix`    | ✔️        | string | Target prefix                                                                                                 |
+| `start_version`    |           | string | First metadata version to rewrite, identified by the name of a metadata.json file in the table's metadata log |
Review Comment:
Thanks @RussellSpitzer. This can be either the full path to the metadata.json file or [just the file name](https://github.com/apache/iceberg/blob/main/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/actions/TestRewriteTablePathsAction.java#L220).
##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,86 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
```sql
CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
```
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+### `rewrite-table-path`
+
+This procedure rewrites an Iceberg table's metadata files to a new location by replacing all source prefixes in absolute paths with a specified target prefix. After both metadata and data are copied to the desired location, the replicated Iceberg table will appear identical to the source table, including its snapshot history, schema, and partition specs.
+
+!!! info
+    This procedure is the starting point for fully or incrementally copying an Iceberg table to a new location. Copying the metadata and data files from the source to the target location is not performed by the procedure itself.
+
+| Argument Name      | Required? | Type   | Description                                                                                                   |
+|--------------------|-----------|--------|---------------------------------------------------------------------------------------------------------------|
+| `table`            | ✔️        | string | Name of the table                                                                                             |
+| `source_prefix`    | ✔️        | string | Source prefix to be replaced                                                                                  |
+| `target_prefix`    | ✔️        | string | Target prefix                                                                                                 |
+| `start_version`    |           | string | First metadata version to rewrite, identified by the name of a metadata.json file in the table's metadata log |
+| `end_version`      |           | string | Last metadata version to rewrite, identified by the name of a metadata.json file in the table's metadata log  |
+| `staging_location` |           | string | Custom staging location                                                                                       |
+
+#### Output
+
+| Output Name          | Type   | Description                                                                     |
+|----------------------|--------|---------------------------------------------------------------------------------|
+| `latest_version`     | string | Name of the latest metadata file version                                        |
+| `file_list_location` | string | Path to a file containing a listing of comma-separated paths ready to be copied |
+
+Example file list content:
+
+```csv
+sourcepath/datafile1.parquet,targetpath/datafile1.parquet
+sourcepath/datafile2.parquet,targetpath/datafile2.parquet
+stagingpath/manifest.avro,targetpath/manifest.avro
+```
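For illustration, the comma-separated file list above can be turned into (source, target) pairs with a few lines of Python; the paths are the hypothetical ones from the example, not real files, and this is only a sketch of consuming the listing, not part of the procedure itself:

```python
import csv
import io

# Example file-list content as emitted at file_list_location (hypothetical paths).
file_list = """\
sourcepath/datafile1.parquet,targetpath/datafile1.parquet
sourcepath/datafile2.parquet,targetpath/datafile2.parquet
stagingpath/manifest.avro,targetpath/manifest.avro
"""

# Each line is "source,target"; parse into pairs ready to hand to a copy tool.
pairs = [tuple(row) for row in csv.reader(io.StringIO(file_list))]

for src, dst in pairs:
    print(f"copy {src} -> {dst}")
```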
+
+#### Examples
+
+Full rewrite of table `my_table`'s metadata paths from a source location in HDFS to a target location in an S3 bucket
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+ table => 'db.my_table',
+  source_prefix => 'hdfs://nn:8020/path/to/source_table',
+  target_prefix => 's3a://bucket/prefix/db.db/my_table'
+);
+```
+
+Incremental rewrite of a table's metadata paths from a source location to a target location, between metadata versions `v2.metadata.json` and `v3.metadata.json`, with files written to a staging location
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+ table => 'db.my_table',
+  source_prefix => 's3a://bucketOne/prefix/db.db/my_table',
+  target_prefix => 's3a://bucketTwo/prefix/db.db/my_table',
+  start_version => 'v2.metadata.json',
+  end_version => 'v3.metadata.json',
+  staging_location => 's3a://bucketStaging/my_table'
+);
+```
+
+Once the rewrite completes, third-party tools (e.g. [DistCp](https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html)) can be used to copy the newly created metadata files and data files to the target location. Here is an example of reading the result from the file list location in Spark.
+
+```java
+List<String> filesToMove =
Review Comment:
I borrowed this from the old documentation and think it might be helpful to show how the result can be used, but I can remove it.
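Since the quoted Java snippet above is cut off by the review anchor, here is a rough, Spark-free sketch of the same idea: reading the file at `file_list_location` into (source, target) pairs. Plain Python stands in for the Spark reader, and the copy step itself is out of scope (hypothetical helper, not the PR's code):

```python
import csv

def read_file_list(file_list_location: str) -> list[tuple[str, str]]:
    """Read the procedure's file list: one 'source,target' pair per line."""
    with open(file_list_location, newline="") as f:
        return [(src, dst) for src, dst in csv.reader(f)]

# Hypothetical usage: hand the pairs to whatever copy tool you use
# (DistCp, `aws s3 cp`, etc.); the copy is not performed by the procedure.
```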
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]