aokolnychyi commented on code in PR #9563:
URL: https://github.com/apache/iceberg/pull/9563#discussion_r1476899638
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/SparkReadConf.java:
##########
@@ -331,4 +331,24 @@ private long driverMaxResultSize() {
   SparkConf sparkConf = spark.sparkContext().conf();
   return sparkConf.getSizeAsBytes(DRIVER_MAX_RESULT_SIZE, DRIVER_MAX_RESULT_SIZE_DEFAULT);
 }
+
+ public boolean executorCacheLocalityEnabled() {
+ return executorCacheEnabled() && executorCacheLocalityEnabledInternal();
+ }
+
+ private boolean executorCacheEnabled() {
+ return confParser
+ .booleanConf()
+ .sessionConf(SparkSQLProperties.EXECUTOR_CACHE_ENABLED)
+ .defaultValue(SparkSQLProperties.EXECUTOR_CACHE_ENABLED_DEFAULT)
+ .parse();
+ }
+
+ private boolean executorCacheLocalityEnabledInternal() {
Review Comment:
My worry is that we don't really know whether enabling this with dynamic
allocation is going to hurt. For instance, it may still make sense if the
minimum number of executors is large enough or if the cluster is hot. Given
that we would also have to add logic to parse the dynamic allocation config,
I'd probably skip the warning log and trust whoever sets this config for now.
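
The trade-off above could be sketched as follows. This is a hypothetical heuristic, not the Iceberg implementation: the two `spark.dynamicAllocation.*` keys are real Spark properties, but the `localityLikelySafe` helper, the plain-`Map` config, and the executor-floor threshold are illustrative assumptions.

```java
import java.util.Map;

// Sketch: under dynamic allocation, only trust executor-cache locality
// when the configured executor floor is high enough that the executor
// set is reasonably stable. Helper name and threshold are assumptions.
public class LocalitySketch {

  static final String DYN_ALLOC_ENABLED = "spark.dynamicAllocation.enabled";
  static final String DYN_ALLOC_MIN_EXECUTORS = "spark.dynamicAllocation.minExecutors";

  static boolean localityLikelySafe(Map<String, String> conf, int minExecutorFloor) {
    boolean dynAlloc = Boolean.parseBoolean(conf.getOrDefault(DYN_ALLOC_ENABLED, "false"));
    if (!dynAlloc) {
      return true; // static allocation: the executor set does not shrink
    }
    // With dynamic allocation on, executors below the floor can be released,
    // so locality hints may point at executors that no longer exist.
    int minExecutors = Integer.parseInt(conf.getOrDefault(DYN_ALLOC_MIN_EXECUTORS, "0"));
    return minExecutors >= minExecutorFloor;
  }

  public static void main(String[] args) {
    Map<String, String> staticConf = Map.of();
    Map<String, String> dynLow =
        Map.of(DYN_ALLOC_ENABLED, "true", DYN_ALLOC_MIN_EXECUTORS, "0");
    Map<String, String> dynHigh =
        Map.of(DYN_ALLOC_ENABLED, "true", DYN_ALLOC_MIN_EXECUTORS, "8");

    System.out.println(localityLikelySafe(staticConf, 4)); // true
    System.out.println(localityLikelySafe(dynLow, 4));     // false
    System.out.println(localityLikelySafe(dynHigh, 4));    // true
  }
}
```

This also illustrates why the reviewer prefers trusting the user: any fixed floor is a guess, and a hot cluster can keep executors alive well above `minExecutors`, so a hard-coded check would reject configurations that actually work fine.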
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]