amogh-jahagirdar commented on code in PR #9603:
URL: https://github.com/apache/iceberg/pull/9603#discussion_r1476567943


##########
spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseReader.java:
##########
@@ -279,5 +284,29 @@ protected void markRowDeleted(InternalRow row) {
         counter().increment();
       }
     }
+
+    @Override
+    protected DeleteLoader newDeleteLoader() {
+      return new CachingDeleteLoader(this::loadInputFile);
+    }
+
+    private class CachingDeleteLoader extends BaseDeleteLoader {
+      private final SparkExecutorCache cache;
+
+      CachingDeleteLoader(Function<DeleteFile, InputFile> loadInputFile) {
+        super(loadInputFile);
+        this.cache = SparkExecutorCache.getOrCreate();
+      }
+
+      @Override
+      protected boolean canCache(long size) {
+        return cache != null && size < cache.maxEntrySize();

Review Comment:
   Nit: It's a bit pedantic given what this is for, but shouldn't this be `size <= cache.maxEntrySize()` instead of just `<`? Based on the current description of the cache property:
   
   ```
     /** Returns the max entry size in bytes that will be considered for caching. */
   ```
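   The two checks only diverge for an entry exactly at the limit. A minimal standalone sketch of the boundary behavior (the class name and the 64 MiB constant here are hypothetical placeholders; the real limit comes from `SparkExecutorCache.maxEntrySize()`):
   
   ```java
   public class CanCacheBoundary {
     // Hypothetical stand-in for SparkExecutorCache.maxEntrySize().
     public static final long MAX_ENTRY_SIZE = 64L * 1024 * 1024; // 64 MiB
   
     // Current check in the PR: an entry exactly at the limit is rejected.
     public static boolean canCacheExclusive(long size) {
       return size < MAX_ENTRY_SIZE;
     }
   
     // Suggested check: an entry exactly at the limit is accepted, which
     // matches the javadoc wording "max entry size ... considered for caching".
     public static boolean canCacheInclusive(long size) {
       return size <= MAX_ENTRY_SIZE;
     }
   
     public static void main(String[] args) {
       System.out.println(canCacheExclusive(MAX_ENTRY_SIZE)); // false
       System.out.println(canCacheInclusive(MAX_ENTRY_SIZE)); // true
     }
   }
   ```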



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

