szehon-ho commented on code in PR #6354:
URL: https://github.com/apache/iceberg/pull/6354#discussion_r1043674439


##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/SparkReadConf.java:
##########
@@ -67,10 +67,9 @@ public boolean caseSensitive() {
   }
 
   public boolean localityEnabled() {
-    InputFile file = table.io().newInputFile(table.location());
-
-    if (file instanceof HadoopInputFile) {
-      String scheme = ((HadoopInputFile) file).getFileSystem().getScheme();
+    if (table.io() instanceof HadoopFileIO) {
+      HadoopInputFile file = (HadoopInputFile) table.io().newInputFile(table.location());
+      String scheme = file.getFileSystem().getScheme();
       boolean defaultValue = LOCALITY_WHITELIST_FS.contains(scheme);
       return PropertyUtil.propertyAsBoolean(readOptions, SparkReadOptions.LOCALITY, defaultValue);
     }

Review Comment:
   Yeah, it's done that way in Flink's SourceUtil. I guess there they have to catch the declared IOException because they instantiate a Hadoop Path, but here we don't. Not sure there's much value in switching to Flink's approach.
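
   For context, the Flink-style alternative mentioned above resolves the FileSystem through a Hadoop Path, which declares IOException and therefore needs a try/catch. The sketch below is illustrative only, not the actual Flink SourceUtil code; it assumes the Hadoop Configuration is reachable via HadoopFileIO#conf(), and the SUPPORTED_SCHEMES set and localityEnabled helper are hypothetical names.

   import java.io.IOException;
   import java.util.Collections;
   import java.util.Set;
   import org.apache.hadoop.fs.Path;
   import org.apache.iceberg.Table;
   import org.apache.iceberg.hadoop.HadoopFileIO;
   import org.apache.iceberg.io.FileIO;

   class LocalitySketch {
     // Hypothetical whitelist of schemes where locality information is worth exposing.
     private static final Set<String> SUPPORTED_SCHEMES = Collections.singleton("hdfs");

     static boolean localityEnabled(Table table) {
       FileIO io = table.io();
       if (io instanceof HadoopFileIO) {
         HadoopFileIO hadoopIO = (HadoopFileIO) io;
         try {
           // Resolving the FileSystem via a Hadoop Path declares IOException, hence the try/catch.
           String scheme =
               new Path(table.location()).getFileSystem(hadoopIO.conf()).getScheme();
           return SUPPORTED_SCHEMES.contains(scheme);
         } catch (IOException e) {
           // Could not resolve the FileSystem; conservatively disable locality.
           return false;
         }
       }
       return false;
     }
   }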


