rdblue commented on code in PR #9596:
URL: https://github.com/apache/iceberg/pull/9596#discussion_r1474880493


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/SparkCatalog.java:
##########
@@ -608,6 +608,53 @@ public View createView(
         "Creating a view is not supported by catalog: " + catalogName);
   }
 
+  @Override
+  public View replaceView(
+      Identifier ident,
+      String sql,
+      String currentCatalog,
+      String[] currentNamespace,
+      StructType schema,
+      String[] queryColumnNames,
+      String[] columnAliases,
+      String[] columnComments,
+      Map<String, String> properties)
+      throws NoSuchViewException, NoSuchNamespaceException {
+    if (null != asViewCatalog) {
+      Schema icebergSchema = SparkSchemaUtil.convert(schema);
+
+      StringJoiner joiner = new StringJoiner(",");
+      Arrays.stream(queryColumnNames).forEach(joiner::add);
+
+      try {
+        Map<String, String> props =
+            ImmutableMap.<String, String>builder()
+                .putAll(Spark3Util.rebuildCreateProperties(properties))
+                .put("queryColumnNames", joiner.toString())
+                .build();
+
+        org.apache.iceberg.view.View view =
+            asViewCatalog
+                .buildView(buildIdentifier(ident))
+                .withDefaultCatalog(currentCatalog)
+                .withDefaultNamespace(Namespace.of(currentNamespace))
+                .withQuery("spark", sql)
+                .withSchema(icebergSchema)
+                .withLocation(properties.get("location"))
+                .withProperties(props)
+                .replace();
+        return new SparkView(catalogName, view);
+      } catch (org.apache.iceberg.exceptions.NoSuchNamespaceException e) {
+        throw new NoSuchNamespaceException(currentNamespace);
+      } catch (org.apache.iceberg.exceptions.NoSuchViewException e) {
+        throw new NoSuchViewException(ident);

Review Comment:
   Why don't we need to handle the exception here? Translating to the right Spark exception seems like something we should definitely do.

   I realize that we already check whether the view exists, but if the view is dropped concurrently, the replace could still fail. It's an edge case, but the right thing to do is to catch the exception and issue the create, right? CREATE OR REPLACE should never fail because the view doesn't exist.
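   
   For illustration, a minimal sketch of that catch-and-create fallback, reusing the builder chain from the diff above. The retry shape is an assumption, not the final implementation; it also assumes the builder can simply be rebuilt for the fallback call:
   
   ```java
   try {
     org.apache.iceberg.view.View view =
         asViewCatalog
             .buildView(buildIdentifier(ident))
             .withDefaultCatalog(currentCatalog)
             .withDefaultNamespace(Namespace.of(currentNamespace))
             .withQuery("spark", sql)
             .withSchema(icebergSchema)
             .withLocation(properties.get("location"))
             .withProperties(props)
             .replace();
     return new SparkView(catalogName, view);
   } catch (org.apache.iceberg.exceptions.NoSuchViewException e) {
     // The view was dropped after the existence check but before replace();
     // CREATE OR REPLACE semantics say we should create it rather than fail.
     org.apache.iceberg.view.View created =
         asViewCatalog
             .buildView(buildIdentifier(ident))
             .withDefaultCatalog(currentCatalog)
             .withDefaultNamespace(Namespace.of(currentNamespace))
             .withQuery("spark", sql)
             .withSchema(icebergSchema)
             .withLocation(properties.get("location"))
             .withProperties(props)
             .create();
     return new SparkView(catalogName, created);
   }
   ```
   
   If the view builder exposes an atomic `createOrReplace()`, that would be preferable to a client-side retry, since it pushes the race handling into the catalog.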


