Garbage-free is mostly about predictable latency.

In my work we build trading applications that send orders to the stock 
exchange when market conditions change. If the JVM stopped the world to 
collect garbage, we might not get those orders out in time.

Our whole stack (messaging, persistence, business logic) is garbage-free. We 
log a lot, to keep visibility into what the application is doing and to 
troubleshoot quickly. Log4j being garbage-free is essential to our success.
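
To make concrete why the unrolled overloads Jeff asks about below exist, here 
is a rough sketch. The class and method names are made up for illustration and 
are not the actual Log4j Filter or MessageFactory2 API; the point is only that 
a varargs call allocates a temporary Object[] on every invocation, while 
fixed-arity overloads do not.

// A minimal, self-contained sketch (hypothetical names, not the real
// Log4j Filter or MessageFactory2 API) of the allocation difference
// between a single varargs method and unrolled fixed-arity overloads.
public final class VarargsVsOverloads {

    // Varargs: every call site that resolves here compiles to
    // log("...", new Object[]{ ... }), i.e. one temporary Object[] per call.
    static void log(String message, Object... params) {
        // format / filter using params[i] ...
    }

    // Fixed-arity overloads: parameters are passed directly, so this hot
    // path allocates no temporary array. The Log4j API unrolls these up to
    // ten parameters, which is where the boilerplate in every concrete
    // Filter implementation comes from.
    static void log(String message, Object p0) {
        // format / filter using p0 ...
    }

    static void log(String message, Object p0, Object p1) {
        // format / filter using p0 and p1 ...
    }

    public static void main(String[] args) {
        String symbol = "ACME";
        long qty = 100L;

        // Resolves to log(String, Object, Object): no Object[] is created.
        // (qty is still autoboxed here; avoiding that is a separate concern.)
        log("sell {} x {}", symbol, qty);

        // Three parameters exceed the overloads defined above, so this call
        // falls through to the varargs method and allocates an Object[3].
        log("sell {} x {} at {}", symbol, qty, 101.25);
    }
}

As I understand it, ReusableMessageFactory takes this further by refilling a 
reusable message object instead of building a new one per call, which is why 
it is the one implementation that avoids the varargs tail call Jeff mentions.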



> On Mar 2, 2025, at 9:55, Jeff Thomas <jeff.w.tho...@outlook.com> wrote:
> 
> Hello Dev Team,
> 
> I am a totally fresh "committer" so forgive me if I ask a silly question.
> 
> I got into a discussion with Piotr about the Log4j Filters and the overloaded 
> filter methods (14 in total) and asked if the extra API is necessary when 
> there is a varargs "filter(...., final Object ... params)" method.  
> 
> He explained that this is the "garbage-free" approach, and that it avoids the 
> creation of a temporary varargs object array.
> 
> I noticed, however, that all but one of the MessageFactory2 implementations 
> (the exception being ReusableMessageFactory) ultimately call some varargs 
> method anyway at the tail end of the call chain.
> 
> I asked whether the situation has changed, since Log4j 3.x targets JDK 17 and 
> there have been major improvements in garbage collection. Ultimately this is 
> *a lot* of boilerplate code, since all of these methods are implemented in 
> every concrete Filter implementation.
> 
> Piotr suggested I ask on this mailing list what the general opinion is.
> 
> A few comments from our discussion:
> * "Garbage-free semantics were important for the financial industry 10 years 
> ago. I am not sure if they still are."
> * "the size of those methods considerably bloats log4j-api"
> * "We recently revamped the performance page: <webpage>
> Profiting from the occasion we added warnings that "async" and "garbage-free" 
> doesn't necessarily mean better performance"
> * "Garbage-free" does not mean less memory consumption. Users ask: 'I enabled 
> garbage-free mode, how come I have 200 MiB more memory usage?''"
> * "a performance specialist at Amazon) told us that memory allocation is 
> painfully slow on AWS Graviton CPUs, so garbage-free is nice there." (I found 
> this 2022 article about this: 
> https://aws.amazon.com/blogs/big-data/understanding-the-jvmmemorypressure-metric-changes-in-amazon-opensearch-service/)
> * "We all have the same question."
> 
> Thoughts?
> 
> Best regards,
> Jeff Thomas
> 
> Note: I also had a discussion with my AI tool ***which never makes 
> mistakes*** 🤐 and it argued the following:
> 
> Historical Importance of Garbage-Free Semantics:
> * 10+ years ago, the financial industry and other performance-critical fields 
> (e.g., gaming, telemetry, high-frequency trading) placed significant emphasis 
> on garbage-free semantics because older garbage collectors (like CMS) 
> struggled to deal with short-lived allocations efficiently.
> * For example, creating Object[] allocations for each log event could 
> overwhelm the GC in high-throughput applications, leading to unpredictable 
> pauses.
> * The focus back then was mitigating GC pressure by avoiding temporary 
> garbage altogether in performance-critical paths.
> 
> Today:
> * Modern GCs (e.g., G1GC, ZGC, Shenandoah) have evolved significantly to 
> handle short-lived garbage extremely efficiently, as it resides in the "Young 
> Generation" and is quickly reclaimed.
> * Garbage-free semantics usually don't improve latency or throughput 
> significantly anymore unless the application is running in a very specialized 
> environment (e.g., AWS Graviton CPUs with slower allocation mechanics) or has 
> custom performance constraints.
> 
> Bottom Line:
> * The importance of garbage-free semantics has diminished in most 
> applications, especially for typical logging use cases, but specific 
> environments (e.g., AWS Graviton or highly latency-sensitive systems) may 
> still care deeply.
