andormarkus commented on issue #2325:
URL: 
https://github.com/apache/iceberg-python/issues/2325#issuecomment-3625295829

   Hi @thomas-pfeiffer,
   
   Thanks for sharing your workaround! I'm curious about your specific scenario 
that led you to disable the cache completely.
   
   In my case, I implemented cache clearing at invocation boundaries:
   1. **Init step** - Clear cache at the beginning of Lambda execution
   2. **Post-execution step** - Clear cache after completing the operation
   
   This approach gives me:
   - **During execution**: Full benefit of the cache (faster performance)
   - **Between invocations**: Memory growth is limited/bounded
   
   With this strategy, my Lambda's memory usage stabilizes at 1600-1800 MB instead of 
growing continuously until it OOMs, while the cache still provides its performance 
benefit within each execution. I'm running with a 2048 MB memory allocation, so this 
10-20% overhead is acceptable in our environment.
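   
   For reference, here is a minimal sketch of the pattern I'm describing. It is an 
illustration only: `clear_manifest_cache()` is a placeholder for whatever clearing hook 
your pyiceberg version exposes, and the catalog/table names are hypothetical.
   
```python
# Minimal sketch: clear the manifest cache at invocation boundaries
# so a warm Lambda container does not accumulate cached manifests.
# NOTE: clear_manifest_cache() is a placeholder -- wire it up to whatever
# cache-clearing hook your pyiceberg version actually provides.

from pyiceberg.catalog import load_catalog


def clear_manifest_cache() -> None:
    """Placeholder: clear pyiceberg's manifest cache here."""
    # Assumption, adjust to your pyiceberg version, e.g.:
    # from pyiceberg.manifest import _manifests
    # _manifests.cache_clear()
    pass


def handler(event, context):
    # 1. Init step: drop anything cached by a previous invocation
    #    of this warm container.
    clear_manifest_cache()

    try:
        catalog = load_catalog("my_catalog")        # hypothetical catalog name
        table = catalog.load_table("db.my_table")   # hypothetical table
        # Work against the table; the cache still speeds up repeated
        # manifest reads within this single invocation.
        rows = table.scan().to_arrow().num_rows
        return {"rows": rows}
    finally:
        # 2. Post-execution step: release cached manifests so memory
        #    stays bounded across invocations.
        clear_manifest_cache()
```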
   
   Could you share more details about your scenario? 
   - What led you to disable the cache entirely rather than clearing at 
invocation boundaries?
   - Did you try clearing at execution start/end and find it insufficient?
   - Are your Lambda executions particularly long-running or processing very 
large manifest lists?
   - What memory allocation are you working with?
   
   Understanding your use case would help the community find the right balance 
between performance and memory stability.
   
   Thanks!

