huaxingao commented on code in PR #11551:
URL: https://github.com/apache/iceberg/pull/11551#discussion_r1864760757


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java:
##########
@@ -125,4 +126,25 @@ protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorder
       return reader;
     }
   }
+
+  private static int numOfExtraColumns(DeleteFilter deleteFilter) {

Review Comment:
   Fixed. Thanks



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java:
##########
@@ -125,4 +126,25 @@ protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorder
       return reader;
     }
   }
+
+  private static int numOfExtraColumns(DeleteFilter deleteFilter) {
+    if (deleteFilter != null) {
+      if (deleteFilter.hasEqDeletes()) {
+        // For equality deletes, the required columns and the expected columns may
+        // not be the same. For example, suppose the table schema is C1, C2, C3, C4, C5
+        // and the query is SELECT C5 FROM table with an equality delete filter on
+        // C3, C4. Then the requested schema is C5, while the required schema is
+        // C5, C3 and C4: the vectorized reader also needs to read C3 and C4 to
+        // figure out which rows are deleted. However, once the deleted rows are
+        // known, the extra column values do not need to be returned to Spark.
+        // numOfExtraColumns tells us how many of these extra columns to remove
+        // from the ColumnarBatch later.

Review Comment:
   Moved. Thanks



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java:
##########
@@ -125,4 +126,25 @@ protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorder
       return reader;
     }
   }
+
+  private static int numOfExtraColumns(DeleteFilter deleteFilter) {
+    if (deleteFilter != null) {
+      if (deleteFilter.hasEqDeletes()) {
+        // For equality deletes, the required columns and the expected columns may
+        // not be the same. For example, suppose the table schema is C1, C2, C3, C4, C5
+        // and the query is SELECT C5 FROM table with an equality delete filter on
+        // C3, C4. Then the requested schema is C5, while the required schema is
+        // C5, C3 and C4: the vectorized reader also needs to read C3 and C4 to
+        // figure out which rows are deleted. However, once the deleted rows are
+        // known, the extra column values do not need to be returned to Spark.
+        // numOfExtraColumns tells us how many of these extra columns to remove
+        // from the ColumnarBatch later.
+        List<Types.NestedField> requiredColumns = deleteFilter.requiredSchema().columns();
+        List<Types.NestedField> expectedColumns = deleteFilter.requestedSchema().columns();
+        return requiredColumns.size() - expectedColumns.size();

Review Comment:
   Yes, because the extra columns are appended to the end of `requestedSchema` in [DeleteFilter.fileProjection](https://github.com/apache/iceberg/blob/06dc721498d6ad95c86f0f884b8ad30f807ef321/data/src/main/java/org/apache/iceberg/data/DeleteFilter.java#L306)
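   The arithmetic above can be illustrated with a minimal standalone sketch (hypothetical class and plain column-name lists standing in for Iceberg's `DeleteFilter` schemas, not the actual API): because the equality-delete columns are appended at the end of the requested columns, the extra-column count is just the size difference of the two lists.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: model requiredSchema/requestedSchema as column-name lists.
public class ExtraColumnsSketch {

  // The required schema is the requested (projected) schema with the
  // equality-delete columns appended at the end, so the number of extra
  // columns to drop from the batch is simply the size difference.
  static int numOfExtraColumns(List<String> requiredColumns, List<String> requestedColumns) {
    return requiredColumns.size() - requestedColumns.size();
  }

  public static void main(String[] args) {
    // SELECT C5 FROM table, with an equality delete filter on C3 and C4:
    List<String> requestedSchema = Arrays.asList("C5");
    List<String> requiredSchema = Arrays.asList("C5", "C3", "C4");

    // Prints 2: the trailing C3 and C4 columns are read only to evaluate
    // the deletes and are removed before the batch is handed to Spark.
    System.out.println(numOfExtraColumns(requiredSchema, requestedSchema));
  }
}
```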



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

