ayushtkn commented on code in PR #6408:
URL: https://github.com/apache/hive/pull/6408#discussion_r3052604643


##########
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/VectorizedPrimitiveColumnReader.java:
##########
@@ -64,6 +64,8 @@ public void readBatch(
       int total,
       ColumnVector column,
       TypeInfo columnType) throws IOException {
+    this.currentDefLevels = new int[total];
+    this.defLevelIndex = 0;

Review Comment:
   Because I removed the old, flawed logic that incorrectly ANDed the child 
isNull flags (the root cause of the bug for structs whose fields are all 
null), the struct reader needs a reliable way to know when the struct itself 
is actually null. Fetching the definition levels from the primitive reader 
isn't a separate fix; it is the replacement mechanism for the deleted logic. 
In Parquet, definition levels are the only correct way to distinguish an 
explicitly NULL struct from a valid struct containing null fields.
   
   If you run the test with these two lines removed, the second insert of 
NULL is not evaluated correctly.
   
   If we don't initialize that array, the primitive reader skips recording the 
definition levels, and getDefinitionLevels() returns null. The struct reader 
then bypasses the definition-level evaluation block entirely. Because the old 
flawed AND logic is gone and the new definition-level logic is bypassed, the 
struct vector defaults to isNull = false. This causes a genuinely NULL struct 
(row 2) to be incorrectly evaluated as an existing struct with null fields: 
{"x":null,"y":null}.
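   The distinction can be illustrated with a minimal, self-contained sketch 
(hypothetical names, not Hive's actual reader classes): for an optional 
struct containing optional fields, a recorded definition level below the 
struct's own level means the struct itself (or an ancestor) is null, while a 
level at or above it means the struct exists even when its fields are null.
   
   ```java
   import java.util.Arrays;
   
   // Hypothetical sketch of definition-level evaluation for an optional struct.
   // Schema assumed: optional group s { optional int x } -> the struct is
   // "present" at definition level 1, and x is present at level 2.
   public class DefLevelSketch {
   
     // structDefLevel: the definition level at which the struct itself exists.
     // Any recorded level below it means the struct is NULL for that row.
     static boolean[] structIsNull(int[] defLevels, int structDefLevel) {
       boolean[] isNull = new boolean[defLevels.length];
       for (int i = 0; i < defLevels.length; i++) {
         isNull[i] = defLevels[i] < structDefLevel;
       }
       return isNull;
     }
   
     public static void main(String[] args) {
       // Row 0: s = {x: 7}    -> def level 2
       // Row 1: s = NULL      -> def level 0 (struct itself is null)
       // Row 2: s = {x: null} -> def level 1 (struct present, field null)
       int[] defLevels = {2, 0, 1};
       boolean[] isNull = structIsNull(defLevels, 1);
       System.out.println(Arrays.toString(isNull)); // [false, true, false]
     }
   }
   ```
   
   Without the recorded definition levels, rows 1 and 2 are indistinguishable 
to the struct reader, which is exactly the ambiguity the two initialization 
lines resolve.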



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

