amogh-jahagirdar commented on code in PR #11775:
URL: https://github.com/apache/iceberg/pull/11775#discussion_r2113087795


##########
api/src/main/java/org/apache/iceberg/expressions/Literals.java:
##########
@@ -300,8 +300,7 @@ public <T> Literal<T> to(Type type) {
         case TIMESTAMP:
           return (Literal<T>) new TimestampLiteral(value());
         case TIMESTAMP_NANO:
-          // assume micros and convert to nanos to match the behavior in the timestamp case above
-          return new TimestampLiteral(value()).to(type);
+          return (Literal<T>) new TimestampNanoLiteral(value());

Review Comment:
   @stevenzwu imo the fix as it stands is good. I'm not entirely sure why we need 
to keep the behavior of interpreting the long as microseconds: the only case 
where a correctness issue could arise is Spark microsecond values being 
interpreted as nanoseconds for a timestamp_nano type. However, Spark doesn't 
even support this data type yet, so that situation isn't currently possible, 
and it feels like the right thing to do is to drop this assumption.
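
   To make the difference concrete, here is a minimal sketch (illustrative only, 
not part of the PR; it assumes the public `Literal` and 
`Types.TimestampNanoType` APIs) of what each code path does with a long literal 
converted to timestamp_ns:

   ```java
   import org.apache.iceberg.expressions.Literal;
   import org.apache.iceberg.types.Types;

   public class TimestampNanoLiteralSketch {
     public static void main(String[] args) {
       long raw = 1_000_000L;

       // With this PR, the long is taken to already be in nanoseconds, so the
       // resulting literal holds 1_000_000 ns (1 ms after epoch).
       Literal<Long> asNanos = Literal.of(raw).to(Types.TimestampNanoType.withoutZone());
       System.out.println(asNanos.value()); // 1000000

       // The removed code routed through TimestampLiteral, treating the long as
       // microseconds and scaling it up, which would have yielded 1_000_000_000 ns
       // for the same input.
     }
   }
   ```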


