rdblue commented on code in PR #9008:
URL: https://github.com/apache/iceberg/pull/9008#discussion_r1697460829


##########
api/src/main/java/org/apache/iceberg/expressions/Literals.java:
##########
@@ -299,6 +300,8 @@ public <T> Literal<T> to(Type type) {
           return (Literal<T>) new TimeLiteral(value());
         case TIMESTAMP:
           return (Literal<T>) new TimestampLiteral(value());
+        case TIMESTAMP_NANO:
+          return (Literal<T>) new TimestampNanoLiteral(value());

Review Comment:
   I don't think we want to implement this conversion as written, because it is likely 
to cause timestamp granularity bugs. If we keep it, we should first convert to 
`timestamp` and then to `timestamp_ns`.
   
   Originally, we did not allow converting a `long` to a timestamp because the unit of 
the value was not known. When I implemented Spark filter pushdown, I added the 
conversion to `timestamp` because Spark internally uses the same microsecond 
representation. That set the precedent that longs are converted to timestamps 
as microseconds.
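   
   To make the granularity risk concrete, here is a small illustration (hypothetical 
values, plain `java.time`, not Iceberg API): the same raw long lands off by a factor 
of 1000 when micros are read as nanos.
   
   ```java
   import java.time.Instant;
   
   public class GranularityExample {
     public static void main(String[] args) {
       long micros = 1_700_000_000_000_000L; // 2023-11-14T22:13:20Z in microseconds
   
       // Misread as nanoseconds, the same long is ~1970-01-20.
       Instant asNanos = Instant.ofEpochSecond(0, micros);
       // Correctly promoted micros -> nanos first.
       Instant asMicros = Instant.ofEpochSecond(0, Math.multiplyExact(micros, 1000L));
   
       System.out.println(asNanos);  // 1970-01-20T16:13:20Z
       System.out.println(asMicros); // 2023-11-14T22:13:20Z
     }
   }
   ```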
   
   Adding this conversion would introduce a bug when pushing filters from Spark 
because the Spark value in micros would be interpreted as nanos for a nano 
column. Instead, this should use `TimestampLiteral(value()).to(type)`. I'd also 
be fine with not implementing this at all, but then I think Spark pushdown 
wouldn't work for nano columns, which is probably not what we want.
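   
   For reference, a minimal sketch of that alternative in the style of the surrounding 
switch (this assumes `TimestampLiteral.to(Type)` already handles the micros-to-nanos 
promotion for `TIMESTAMP_NANO`; a sketch, not a tested patch):
   
   ```java
   case TIMESTAMP_NANO:
     // Keep the precedent: interpret the long as microseconds, then let the
     // timestamp literal promote micros to nanos for the nano type.
     return new TimestampLiteral(value()).to(type);
   ```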



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
