szehon-ho commented on code in PR #12115:
URL: https://github.com/apache/iceberg/pull/12115#discussion_r1947546783

##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,100 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
 Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
 ```sql
 CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or copying to another location.
+
+### `rewrite-table-path`
+
+Stages a copy of the Iceberg table's metadata files where every absolute path source prefix is replaced by the specified target prefix.
+This can be the starting point to fully or incrementally copy an Iceberg table located under an absolute path under a
+source prefix to another under the target prefix.
+
+!!! info
+    This procedure only prepares metadata or/and data files for an existing Iceberg table for a copy or move to a new location.

Review Comment:
   Nit, how about copy or move => copying or moving (to match the language in the 'Table Replication' section, and also the next sentence)

##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,100 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
 Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
 ```sql
 CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or copying to another location.

Review Comment:
   Very small nit, let's do 'a move or copy' to match below language. Sorry didn't catch earlier.

##########
docs/docs/spark-procedures.md:
##########
@@ -972,4 +972,100 @@ CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id =>
 Collect statistics of the snapshot with id `snap1` of table `my_table` for columns `col1` and `col2`
 ```sql
 CALL catalog_name.system.compute_table_stats(table => 'my_table', snapshot_id => 'snap1', columns => array('col1', 'col2'));
-```
\ No newline at end of file
+```
+
+## Table Replication
+
+The `rewrite-table-path` procedure prepares an Iceberg table for moving or copying to another location.
+
+### `rewrite-table-path`
+
+Stages a copy of the Iceberg table's metadata files where every absolute path source prefix is replaced by the specified target prefix.
+This can be the starting point to fully or incrementally copy an Iceberg table located under an absolute path under a
+source prefix to another under the target prefix.
+
+!!! info
+    This procedure only prepares metadata or/and data files for an existing Iceberg table for a copy or move to a new location.
+    Copying/Moving metadata and data files to the new location is not part of this procedure.
+
+
+| Argument Name      | Required? | default                                         | Type   | Description                                                             |
+|--------------------|-----------|-------------------------------------------------|--------|-------------------------------------------------------------------------|
+| `table`            | ✔️        |                                                 | string | Name of the table                                                       |
+| `source_prefix`    | ✔️        |                                                 | string | The existing prefix to be replaced                                      |
+| `target_prefix`    | ✔️        |                                                 | string | The replacement prefix for `source_prefix`                              |
+| `start_version`    |           | first metadata.json in table's metadata log     | string | The name or path to the chronologically first metadata.json to rewrite |
+| `end_version`      |           | latest metadata.json                            | string | The name or path to the chronologically last metadata.json to rewrite  |
+| `staging_location` |           | new directory under table's metadata directory  | string | The output location for newly modified metadata files                  |
+
+
+#### Modes of operation:
+
+- Full Rewrite:
+
+By default, the procedure operates in full rewrite mode, rewriting all reachable metadata files. This includes metadata.json, manifest lists, manifests, and position delete files.
+
+- Incremental Rewrite:
+
+If `start_version` is provided, the procedure will only rewrite metadata files created between `start_version` and `end_version`. `end_version` defaults to the latest metadata.json of the table.
+
+#### Output
+
+| Output Name           | Type   | Description                                                                          |
+|----------------------|--------|--------------------------------------------------------------------------------------|
+| `latest_version`     | string | Name of the latest metadata file rewritten by this procedure                         |
+| `file_list_location` | string | Path to a file containing a listing of comma-separated source and destination paths  |
+
+##### File List Copy Plan
+The file list contains the copy plan for all files added to the table between `startVersion` and `endVersion`.
+For each file, it specifies
+
+- Source Path:
+The original file path in the table, or the staging location if the file has been rewritten.
+
+- Target Path:
+The path with the replacement prefix.
+
+The following example shows a copy plan for three files:
+
+```csv
+sourcepath/datafile1.parquet,targetpath/datafile1.parquet
+sourcepath/datafile2.parquet,targetpath/datafile2.parquet
+stagingpath/manifest.avro,targetpath/manifest.avro
+```
+
+#### Examples
+
+Full rewrite of a table's path from source location in HDFS to a target location in S3 bucket of table `my_table`.
+This produces a new set of metadata using the s3a prefix in the default staging location under table's metadata directory.
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+    table => 'db.my_table',
+    source_prefix => "hdfs://nn:8020/path/to/source_table",
+    target_prefix => "s3a://bucket/prefix/db.db/my_table"
+);
+```
+
+Incremental rewrite of a table's path from a source location to a target location between metadata versions
+`v2.metadata.json` and `v20.metadata.json`, with files written to an explicit staging location.
+
+```sql
+CALL catalog_name.system.rewrite_table_path(
+    table => 'db.my_table',
+    source_prefix => "s3a://bucketOne/prefix/db.db/my_table",
+    target_prefix => "s3a://bucketTwo/prefix/db.db/my_table",
+    start_version => "v2.metadata.json",
+    end_version => "v20.metadata.json",
+    staging_location => "s3a://bucketStaging/my_table"
+);
+```
+
+Once the rewrite is completed, third-party tools (
+eg. [Distcp](https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html)) can be used to copy the newly created
+metadata files and data files to the target location.
+
+Lastly, [register_table](#register_table) procedure can be used to register copied table in the target location with catalog.
+
+!!! warning
+    Iceberg table with statistics files are not currently supported for path rewrite.

Review Comment:
   Just merged the other change, so we can update this part :)
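For reference, the final registration step mentioned in the quoted docs can be sketched as below. This is only an illustrative sketch: it assumes the staged metadata and data files have already been copied under the target prefix from the examples (`s3a://bucket/prefix/db.db/my_table`), that the table keeps its `metadata/` directory layout, and that `v20.metadata.json` is the `latest_version` reported by `rewrite_table_path`; the destination table name `db.my_table_copy` is likewise hypothetical.

```sql
-- Illustrative only: register the copied table with the target catalog once the
-- file copy has completed. Paths, metadata version, and table name are assumptions.
CALL catalog_name.system.register_table(
    table => 'db.my_table_copy',
    metadata_file => 's3a://bucket/prefix/db.db/my_table/metadata/v20.metadata.json'
);
```

Registering against the rewritten `latest_version` points the catalog at metadata whose internal paths already use the target prefix, which is the intent of the procedure described above.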