jacobmarble commented on code in PR #11775:
URL: https://github.com/apache/iceberg/pull/11775#discussion_r1951255807


##########
api/src/main/java/org/apache/iceberg/expressions/Literals.java:
##########
@@ -300,8 +300,7 @@ public <T> Literal<T> to(Type type) {
         case TIMESTAMP:
           return (Literal<T>) new TimestampLiteral(value());
         case TIMESTAMP_NANO:
-          // assume micros and convert to nanos to match the behavior in the timestamp case above
-          return new TimestampLiteral(value()).to(type);
+          return (Literal<T>) new TimestampNanoLiteral(value());

Review Comment:
   Please forgive my delayed participation here. I'm not familiar with either Spark or Trino, but @epgif and I did author the nanoseconds PR.
   
   > Originally, we did not allow conversion to long because the unit of the value was not known. When I implemented Spark filter pushdown, I added the conversion to timestamp because Spark internally uses the same microsecond representation. That set a precedent that longs are converted to timestamp using microseconds.
   
   ^^ @rdblue's comment https://github.com/apache/iceberg/pull/9008#discussion_r1697460829 is where we switched away from `new TimestampNanoLiteral(value())`.
   
   > we have the assumption that every long we see, is in microseconds
   
   This is the core of the problem, right?
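   
   To make the ambiguity concrete, here is a minimal sketch (mine, not from the PR) of how the same long round-trips under each behavior. It assumes the public `Literal.of(long)` and `Types.TimestampNanoType` API from the nanoseconds PR; the class name is illustrative.
   
   ```java
   // Hedged sketch, not part of the PR: illustrates the unit ambiguity.
   import org.apache.iceberg.expressions.Literal;
   import org.apache.iceberg.types.Types;
   
   public class LongLiteralUnits {
     public static void main(String[] args) {
       long raw = 1_000_000L; // micros? nanos? the long alone can't say
   
       // Before this change: the long is assumed to be MICROseconds and is
       // converted via TimestampLiteral, so the timestamp_ns literal holds
       // raw * 1000.
       Literal<Long> ns = Literal.of(raw).to(Types.TimestampNanoType.withoutZone());
   
       // With this diff applied, the same call would treat the long as
       // NANOseconds as-is, so the literal would hold raw, unchanged.
       System.out.println(ns.value()); // 1000000000 before the diff; 1000000 after
     }
   }
   ```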



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

