stevenzwu commented on code in PR #11775:
URL: https://github.com/apache/iceberg/pull/11775#discussion_r2112903188


##########
api/src/main/java/org/apache/iceberg/expressions/Literals.java:
##########
@@ -300,8 +300,7 @@ public <T> Literal<T> to(Type type) {
         case TIMESTAMP:
           return (Literal<T>) new TimestampLiteral(value());
         case TIMESTAMP_NANO:
-          // assume micros and convert to nanos to match the behavior in the timestamp case above
-          return new TimestampLiteral(value()).to(type);
+          return (Literal<T>) new TimestampNanoLiteral(value());

Review Comment:
   @ebyhr for the exception with the string literal in your last comment, I assume the partition column is an hour/day-type transform on a timestamp_ns column? For line 255 (the `Projections` line), what is the literal for the `dataFilter`? Can you add a unit test to reproduce and cover that scenario?
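
   A rough sketch of the kind of test I have in mind (the schema, column name, hour transform, and timestamp value are my guesses at the failing scenario, not taken from the PR):

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.expressions.Expression;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.expressions.Projections;
import org.apache.iceberg.types.Types;

public class TestTimestampNanoProjection {
  public static void main(String[] args) {
    // timestamp_ns column partitioned by hour (hypothetical repro setup)
    Schema schema =
        new Schema(Types.NestedField.required(1, "ts", Types.TimestampNanoType.withoutZone()));
    PartitionSpec spec = PartitionSpec.builderFor(schema).hour("ts").build();

    // string literal carrying nanosecond precision
    Expression filter = Expressions.equal("ts", "2017-11-16T14:31:08.000000001");

    // projecting the data filter onto the partition spec is where I would
    // expect the reported exception to surface
    Expression projected = Projections.inclusive(spec).project(filter);
    System.out.println(projected);
  }
}
```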
   
   I understand the old behavior was for compatibility with Spark, but I feel this change is more intuitive than before. Since a long literal can't express its precision explicitly, it is more natural to assume the same precision as the timestamp field type.
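
   To illustrate what I mean (a sketch only, assuming `Types.TimestampNanoType` and the public `Literal` API; not tested against this PR):

```java
import org.apache.iceberg.expressions.Literal;
import org.apache.iceberg.types.Types;

public class NanoLiteralSketch {
  public static void main(String[] args) {
    // With this change, a long bound against a timestamp_ns field is taken
    // as nanoseconds (the field's precision) instead of being assumed to be
    // micros and scaled by 1000, as the old code did.
    Literal<Long> lit =
        Literal.of(1_510_842_668_000_000_001L) // nanos since epoch
            .to(Types.TimestampNanoType.withoutZone());
    System.out.println(lit.value()); // same long, no implicit scaling
  }
}
```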
   
   A string literal, by contrast, can express different precisions in the string itself. I am wondering whether Spark behaves correctly when the filter value is a string literal with nanosecond precision. If it does, can we ask Spark users not to use long literals for timestamp_ns?
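
   For reference, the string form I have in mind (again a sketch, assuming the string-to-timestamp_ns conversion path this PR touches):

```java
import org.apache.iceberg.expressions.Literal;
import org.apache.iceberg.types.Types;

public class StringNanoLiteralSketch {
  public static void main(String[] args) {
    // The string can spell out sub-microsecond digits that a bare long
    // cannot label with a precision.
    Literal<Long> nanos =
        Literal.of("2017-11-16T14:31:08.000000001")
            .to(Types.TimestampNanoType.withoutZone());
    System.out.println(nanos.value());
  }
}
```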


