dimas-b commented on code in PR #3723:
URL: https://github.com/apache/polaris/pull/3723#discussion_r2789296872


##########
site/content/in-dev/unreleased/polaris-spark-client.md:
##########
@@ -123,12 +123,27 @@ build a Spark client jar locally from source. Please check out the Polaris repo
 The following describes the current limitations of the Polaris Spark client:
 
 ### General Limitations
-1. The Polaris Spark client only supports Iceberg and Delta tables. It does not support other table formats like CSV, JSON, etc.
+1. The Polaris Spark client supports Iceberg, Delta, Hudi, and Paimon tables. Other table formats like CSV, JSON, etc. are not supported for table operations through Spark.
 2. Generic tables (non-Iceberg tables) APIs do not currently support credential vending.
 
 ### Delta Table Limitations
 1. Create table as select (CTAS) is not supported for Delta tables. As a result, the `saveAsTable` method of `Dataframe`
    is also not supported, since it relies on the CTAS support.
 2. Create a Delta table without explicit location is not supported.
 3. Rename a Delta table is not supported.
-4. ALTER TABLE ... SET LOCATION is not supported for DELTA table.
\ No newline at end of file
+4. ALTER TABLE ... SET LOCATION is not supported for DELTA table.
+
+### Paimon Table Limitations
+1. Paimon support requires the Paimon Spark connector to be included in the classpath.

Review Comment:
   Could you add an example for that?
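   One way such an example could read, sketched here with illustrative Paimon connector coordinates and version (these are assumptions, not values taken from the PR), is to add the connector via `spark.jars.packages` when building the session:

```java
// Minimal sketch, assuming the Paimon Spark connector for Spark 3.5 is wanted.
// The artifact version is a placeholder; pick the release matching your setup.
import org.apache.spark.sql.SparkSession;

public class PaimonClasspathExample {
  public static void main(String[] args) {
    SparkSession spark =
        SparkSession.builder()
            .appName("polaris-paimon-classpath-example")
            // Downloads the connector at startup; the same coordinates can be
            // passed with --packages to spark-shell or spark-submit instead.
            .config("spark.jars.packages", "org.apache.paimon:paimon-spark-3.5:0.9.0")
            .getOrCreate();

    spark.stop();
  }
}
```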



##########
plugins/spark/v3.5/spark/src/main/java/org/apache/polaris/spark/PolarisSparkCatalog.java:
##########
@@ -74,6 +74,9 @@ public Table loadTable(Identifier identifier) throws NoSuchTableException {
       // Currently Hudi supports Spark Datasource V1, therefore we return a V1Table
       if (PolarisCatalogUtils.useHudi(genericTable.getFormat())) {
        return PolarisCatalogUtils.loadV1SparkTable(genericTable, identifier, name());
+      } else if (PolarisCatalogUtils.usePaimon(genericTable.getFormat())) {
+        // Paimon uses DataSourceV2, so we return a V2Table
+        return PolarisCatalogUtils.loadV2SparkTable(genericTable);

Review Comment:
   Is this really necessary? The `else` block below does exactly the same thing by default 🤔
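   For reference, if the dedicated Paimon branch were dropped, the dispatch could collapse to roughly the shape below. This is a sketch of the branch structure only: it assumes the existing default path already returns generic tables via `loadV2SparkTable`, and the `genericTable` lookup shown is a stand-in for whatever the method already does, not the PR's actual code.

```java
// Sketch of the branch structure only, assuming the default path already loads
// generic tables as DataSourceV2 tables (as the review comment implies).
public Table loadTable(Identifier identifier) throws NoSuchTableException {
  // Stand-in for the existing generic-table lookup earlier in the method.
  GenericTable genericTable = loadGenericTable(identifier);
  if (PolarisCatalogUtils.useHudi(genericTable.getFormat())) {
    // Hudi supports Spark DataSource V1, so a V1Table is returned.
    return PolarisCatalogUtils.loadV1SparkTable(genericTable, identifier, name());
  }
  // Paimon and other V2-capable formats fall through to the V2 path, so a
  // separate usePaimon branch would add nothing here.
  return PolarisCatalogUtils.loadV2SparkTable(genericTable);
}
```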



##########
plugins/spark/v3.5/spark/src/main/java/org/apache/polaris/spark/SparkCatalog.java:
##########
@@ -167,6 +170,11 @@ public Table createTable(
         // to create the .hoodie folder in cloud storage
        TableCatalog hudiCatalog = hudiHelper.loadHudiCatalog(this.polarisSparkCatalog);
         return hudiCatalog.createTable(ident, schema, transforms, properties);
+      } else if (PolarisCatalogUtils.usePaimon(provider)) {

Review Comment:
   suggestion: why not put the `usePaimon` method into `PaimonHelper`?
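   If the suggestion were followed, the check might live on a helper along these lines; the package, class shape, and the exact provider string are assumptions for illustration, not code from the PR.

```java
// Illustrative sketch of a PaimonHelper hosting the format check.
// The provider string and class layout are assumptions, not from the PR.
import java.util.Locale;

public final class PaimonHelper {
  private static final String PAIMON_PROVIDER = "paimon";

  private PaimonHelper() {}

  /** Returns true when the Spark table provider identifies a Paimon table. */
  public static boolean usePaimon(String provider) {
    return provider != null
        && PAIMON_PROVIDER.equals(provider.toLowerCase(Locale.ROOT));
  }
}
```

   The call site in `SparkCatalog#createTable` could then read `PaimonHelper.usePaimon(provider)` rather than going through `PolarisCatalogUtils`.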


