kosiew commented on code in PR #20047: URL: https://github.com/apache/datafusion/pull/20047#discussion_r3158625956
########## datafusion/execution/src/cache/file_statistics_cache.rs: ##########
@@ -0,0 +1,745 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use crate::cache::cache_manager::{
+    CachedFileMetadata, FileStatisticsCache, FileStatisticsCacheEntry,
+};
+use crate::cache::{CacheAccessor, TableScopedPath};
+use std::collections::HashMap;
+use std::sync::Mutex;
+
+pub use crate::cache::DefaultFilesMetadataCache;
+use crate::cache::lru_queue::LruQueue;
+use datafusion_common::TableReference;
+use datafusion_common::heap_size::{DFHeapSize, DFHeapSizeCtx};
+
+/// Default implementation of [`FileStatisticsCache`]
+///
+/// Stores cached file metadata (statistics and orderings) for files.
+///
+/// The typical usage pattern is:
+/// 1. Call `get(path)` to check for cached value
+/// 2. If `Some(cached)`, validate with `cached.is_valid_for(&current_meta)`
+/// 3. If invalid or missing, compute new value and call `put(path, new_value)`
+///
+/// # Internal details
+///
+/// The `memory_limit` controls the maximum size of the cache, which uses a
+/// Least Recently Used eviction algorithm.
When adding a new entry, if the total
+/// size of the cached entries exceeds `memory_limit`, the least recently used entries
+/// are evicted until the total size is lower than `memory_limit`.
+///
+///
+/// [`FileStatisticsCache`]: crate::cache::cache_manager::FileStatisticsCache
+#[derive(Default)]
+pub struct DefaultFileStatisticsCache {
+    state: Mutex<DefaultFileStatisticsCacheState>,
+}
+
+impl DefaultFileStatisticsCache {
+    pub fn new(memory_limit: usize) -> Self {
+        Self {
+            state: Mutex::new(DefaultFileStatisticsCacheState::new(memory_limit)),
+        }
+    }
+
+    /// Returns the size of the cached memory, in bytes.
+    pub fn memory_used(&self) -> usize {
+        let state = self.state.lock().unwrap();
+        state.memory_used
+    }
+}
+
+struct DefaultFileStatisticsCacheState {
+    lru_queue: LruQueue<TableScopedPath, CachedFileMetadata>,
+    memory_limit: usize,
+    memory_used: usize,
+}
+
+pub const DEFAULT_FILE_STATISTICS_MEMORY_LIMIT: usize = 20 * 1024 * 1024; // 20MiB
+
+impl Default for DefaultFileStatisticsCacheState {
+    fn default() -> Self {
+        Self {
+            lru_queue: LruQueue::new(),
+            memory_limit: DEFAULT_FILE_STATISTICS_MEMORY_LIMIT,
+            memory_used: 0,
+        }
+    }
+}
+
+impl DefaultFileStatisticsCacheState {
+    fn new(memory_limit: usize) -> Self {
+        Self {
+            lru_queue: LruQueue::new(),
+            memory_limit,
+            memory_used: 0,
+        }
+    }
+    fn get(&mut self, key: &TableScopedPath) -> Option<CachedFileMetadata> {
+        self.lru_queue.get(key).cloned()
+    }
+
+    fn put(
+        &mut self,
+        key: &TableScopedPath,
+        value: CachedFileMetadata,
+    ) -> Option<CachedFileMetadata> {
+        let mut ctx = DFHeapSizeCtx::default();

Review Comment:
   This logic is correct today, but it feels a bit fragile. Right now it works because `TableScopedPath::heap_size` does not interact with the context, so recomputing `key.heap_size` with a fresh `DFHeapSizeCtx` gives the same result.
   However, if the key ever includes something like an `Arc` that *does* register in the context, the second computation could return 0 and silently skew `memory_used`. It might be safer to reuse the already computed `key_size` instead of recomputing it with a different context. That would make the intent clearer and avoid future footguns.

########## datafusion/execution/src/cache/cache_manager.rs: ##########

Review Comment:
   This comment says "Default is disabled", but in practice the cache is created when `list_files_cache_limit > 0` (which it is by default). So effectively this is enabled by default, similar to the earlier issue we discussed. Might be worth tweaking the comment to reflect the actual behavior so it does not surprise future readers.

########## datafusion/common/src/heap_size.rs: ##########
@@ -0,0 +1,551 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use crate::stats::Precision;
+use crate::{ColumnStatistics, ScalarValue, Statistics, TableReference};
+use arrow::array::{
+    Array, FixedSizeListArray, LargeListArray, LargeListViewArray, ListArray,
+    ListViewArray, MapArray, StructArray,
+};
+use arrow::datatypes::{
+    DataType, Field, Fields, IntervalDayTime, IntervalMonthDayNano, IntervalUnit,
+    TimeUnit, UnionFields, UnionMode, i256,
+};
+use chrono::{DateTime, Utc};
+use half::f16;
+use hashbrown::HashSet;
+use std::collections::HashMap;
+use std::fmt::Debug;
+use std::sync::Arc;
+
+/// This is a temporary solution until <https://github.com/apache/datafusion/pull/19599> and
+/// <https://github.com/apache/arrow-rs/pull/9138> are resolved.
+/// Trait for calculating the size of various containers
+pub trait DFHeapSize {
+    /// Return the size of any bytes allocated on the heap by this object,
+    /// including heap memory in those structures
+    ///
+    /// Note that the size of the type itself is not included in the result --
+    /// instead, that size is added by the caller (e.g. container).
+    fn heap_size(&self, ctx: &mut DFHeapSizeCtx) -> usize;
+}
+
+#[derive(Default)]
+pub struct DFHeapSizeCtx {
+    seen: HashSet<usize>,
+}
+
+impl DFHeapSize for Statistics {
+    fn heap_size(&self, ctx: &mut DFHeapSizeCtx) -> usize {
+        self.num_rows.heap_size(ctx)
+            + self.total_byte_size.heap_size(ctx)
+            + self.column_statistics.heap_size(ctx)
+    }
+}
+
+impl DFHeapSize for TableReference {
+    fn heap_size(&self, ctx: &mut DFHeapSizeCtx) -> usize {
+        self.table().heap_size(ctx)

Review Comment:
   `TableReference::heap_size` currently counts the string length, even though it internally uses `Arc<str>`. This slightly undercounts the real memory usage (missing the Arc overhead and possible sharing), but it is consistent with the approximate approach used elsewhere. Might be worth leaving a small TODO or comment here so future readers know this is intentional.

--
This is an automated message from the Apache Git Service.
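As an aside for readers skimming the diff, the eviction behavior described in the `DefaultFileStatisticsCache` doc comment ("evict least recently used entries until the total size is lower than `memory_limit`") can be sketched with a simplified stand-in. This is not DataFusion's `LruQueue`; the `SketchLru` type, its string keys, and the explicit per-entry sizes are invented for illustration:

```rust
use std::collections::VecDeque;

/// Hypothetical simplified cache: front = least recently used,
/// back = most recently used. Entry sizes are supplied by the caller.
struct SketchLru {
    queue: VecDeque<(String, usize)>, // (key, entry_size)
    memory_limit: usize,
    memory_used: usize,
}

impl SketchLru {
    fn new(memory_limit: usize) -> Self {
        Self { queue: VecDeque::new(), memory_limit, memory_used: 0 }
    }

    /// Inserts an entry at the most-recently-used end, then evicts
    /// from the LRU end until the total size is under `memory_limit`.
    fn put(&mut self, key: &str, size: usize) {
        // Replace an existing entry for the same key, if any.
        if let Some(pos) = self.queue.iter().position(|(k, _)| k == key) {
            let (_, old) = self.queue.remove(pos).unwrap();
            self.memory_used -= old;
        }
        self.queue.push_back((key.to_string(), size));
        self.memory_used += size;
        // Evict least recently used entries while over the limit.
        while self.memory_used > self.memory_limit {
            match self.queue.pop_front() {
                Some((_, evicted)) => self.memory_used -= evicted,
                None => break,
            }
        }
    }
}

fn main() {
    let mut cache = SketchLru::new(100);
    cache.put("a.parquet", 60);
    cache.put("b.parquet", 60); // pushes total to 120 -> "a.parquet" evicted
    assert_eq!(cache.memory_used, 60);
    assert_eq!(cache.queue.len(), 1);
}
```

Note that a single entry larger than `memory_limit` is itself evicted immediately by the loop, which is one of the edge cases a real implementation has to decide on.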
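The context-reuse footgun raised in the first review comment can also be demonstrated in isolation. The `Ctx` and `Key` types below are hypothetical stand-ins for `DFHeapSizeCtx` and a key that registers an `Arc` in the context (today's `TableScopedPath` does not); the sketch only shows why a pointer-deduplicating context makes `heap_size` depend on context history, so reusing an already computed `key_size` is safer than recomputing:

```rust
use std::collections::HashSet;
use std::sync::Arc;

/// Hypothetical stand-in for `DFHeapSizeCtx`: deduplicates shared
/// allocations by pointer address so each is counted at most once.
#[derive(Default)]
struct Ctx {
    seen: HashSet<usize>,
}

/// Hypothetical key type holding a shared `Arc<str>`.
struct Key {
    name: Arc<str>,
}

impl Key {
    /// Counts the string's heap bytes only the first time this
    /// particular allocation is seen in `ctx`; 0 on later sightings.
    fn heap_size(&self, ctx: &mut Ctx) -> usize {
        let ptr = Arc::as_ptr(&self.name) as *const u8 as usize;
        if ctx.seen.insert(ptr) { self.name.len() } else { 0 }
    }
}

fn main() {
    let key = Key { name: Arc::from("part-00001.parquet") };
    let mut ctx = Ctx::default();

    // First computation counts the bytes...
    let first = key.heap_size(&mut ctx);
    // ...but recomputing against a context that has already seen the
    // allocation returns 0, so accounting that recomputes instead of
    // reusing `first` silently drifts.
    let second = key.heap_size(&mut ctx);

    assert_eq!(first, 18);
    assert_eq!(second, 0);
}
```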
