pan3793 commented on code in PR #11480:
URL: https://github.com/apache/iceberg/pull/11480#discussion_r1833624554


##########
spark/v3.5/spark-extensions/src/main/scala/org/apache/spark/sql/catalyst/parser/extensions/IcebergSparkSqlExtensionsParser.scala:
##########
@@ -151,6 +155,11 @@ class IcebergSparkSqlExtensionsParser(delegate: ParserInterface) extends ParserI
             isSnapshotRefDdl(normalized)))
   }
 
+  private def isIcebergProcedure(normalized: String): Boolean = {
+    // All builtin Iceberg procedures are under the 'system' namespace
+    SparkProcedures.names().asScala.map("system." + _).exists(normalized.contains)
+  }
Review Comment:
   > Do we have to do any casing here? Or are we always guaranteed to get lower casing?
   
   @RussellSpitzer the method accepts a "normalized" SQL here.
   
   From the context, I think the idea here is: use a simple rule to normalize the SQL and check whether it satisfies the Iceberg syntax. 1) If yes, use Iceberg's parser to parse the **original** SQL (not the normalized one, so everything else, like hints and comments, is preserved); 2) if not, leave it to the delegate parser.
   
   > Maybe this should live within SparkProcedures? SparkProcedures.isIcebergProcedure(string)
   
   Given it only accepts a "normalized" SQL, I would prefer to leave it here, alongside the existing `isSnapshotRefDdl`.
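
   To illustrate the dispatch idea being discussed (a hedged sketch, not the actual Iceberg code: the procedure names and `normalize` rule here are simplified stand-ins for `SparkProcedures.names()` and the parser's real normalization):

   ```scala
   // Sketch: normalize the SQL only for the syntax check, but hand the
   // ORIGINAL text to whichever parser wins, so hints/comments survive.
   object ParserDispatchSketch {
     // Hypothetical stand-in for SparkProcedures.names()
     private val procedureNames = Set("rollback_to_snapshot", "expire_snapshots")

     // Simplified normalization: lowercase, strip line comments, collapse whitespace
     private def normalize(sql: String): String =
       sql.toLowerCase.replaceAll("--.*", " ").replaceAll("\\s+", " ").trim

     // All builtin Iceberg procedures live under the 'system' namespace
     private def isIcebergProcedure(normalized: String): Boolean =
       procedureNames.map("system." + _).exists(normalized.contains)

     // Returns which parser would handle the statement
     def chooseParser(sql: String): String = {
       val normalized = normalize(sql)
       if (normalized.startsWith("call") && isIcebergProcedure(normalized)) "iceberg"
       else "delegate"
     }
   }
   ```

   The point of the two-string design: the cheap `contains` check over the normalized text only gates *which* parser runs; the parser itself always receives the untouched original SQL.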



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

