daguimu opened a new pull request, #16170:
URL: https://github.com/apache/dubbo/pull/16170

   ## Problem
   
   `LRU2Cache.computeIfAbsent()` releases the lock between `get()` and `put()`,
breaking the atomicity that **every other method** in this class maintains:
`get`, `put`, `remove`, `size`, `clear`, `containsKey`, and `setMaxCapacity`
all hold the lock for the entire operation, but `computeIfAbsent` does not:
   
   ```java
   // Current broken code - lock released between get() and put()
   public V computeIfAbsent(K key, Function<? super K, ? extends V> fn) {
       V value = get(key);       // acquires lock, releases lock
       if (value == null) {
           value = fn.apply(key); // no lock held
           put(key, value);       // acquires lock, releases lock
       }
       return value;
   }
   ```
   
   This is called from the **Triple protocol hot path** in 
`StreamUtils.lruHeaderMap.computeIfAbsent(key, k -> k.toLowerCase())` for every 
RPC request.
   
   ### Impact
   
   Under concurrency, two or more threads can:
   1. Both see `get(key) == null`
   2. Both compute `fn.apply(key)` (duplicate work)
   3. Both call `put(key, value)`: the double put defeats the LRU-2 promotion
logic, since the first `put` only registers the key in the pre-cache and the
second unintentionally promotes it to the main cache after what is logically a
single access
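   
   The race is an ordinary check-then-act bug, so it can be forced
deterministically. The following self-contained sketch (hypothetical names; a
lock-guarded `HashMap` stands in for `LRU2Cache`) uses a latch to hold both
threads past the null check before either writes, showing the mapping function
running twice:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.atomic.AtomicInteger;
   
   public class RaceDemo {
       // Stand-in for LRU2Cache; the point is the locking pattern, not LRU-2.
       static final Map<String, String> cache = new HashMap<>();
       static final AtomicInteger applies = new AtomicInteger();
       static final CountDownLatch bothChecked = new CountDownLatch(2);
   
       static String computeIfAbsentBroken(String key) throws InterruptedException {
           String value;
           synchronized (cache) { value = cache.get(key); } // lock released here
           if (value == null) {
               bothChecked.countDown();
               bothChecked.await();               // force both threads past the check
               applies.incrementAndGet();
               value = key.toLowerCase();         // fn.apply(key), no lock held
               synchronized (cache) { cache.put(key, value); } // duplicate put
           }
           return value;
       }
   
       public static void main(String[] args) throws InterruptedException {
           Runnable task = () -> {
               try { computeIfAbsentBroken("Content-Type"); }
               catch (InterruptedException e) { Thread.currentThread().interrupt(); }
           };
           Thread t1 = new Thread(task), t2 = new Thread(task);
           t1.start(); t2.start();
           t1.join(); t2.join();
           // Both threads observed null, so the function ran twice.
           System.out.println("fn applied " + applies.get() + " times"); // prints 2
       }
   }
   ```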
   
   ## Root Cause
   
   The method was composed from the public `get()` and `put()` methods (which 
each independently acquire/release the lock) instead of operating on the 
underlying `LinkedHashMap` directly under a single lock span.
   
   ## Fix
   
   Hold the lock across the entire `computeIfAbsent` operation, using 
`super.get()` / `super.put()` directly and inlining the LRU-2 promotion logic 
to avoid nested lock acquisition:
   
   ```java
   public V computeIfAbsent(K key, Function<? super K, ? extends V> fn) {
       lock.lock();
       try {
           V value = super.get(key);
           if (value == null) {
               value = fn.apply(key);
               if (preCache.containsKey(key)) {
                   preCache.remove(key);
                   super.put(key, value);
               } else {
                   preCache.put(key, true);
               }
           }
           return value;
       } finally {
           lock.unlock();
       }
   }
   ```
   
   ## Tests Added
   
   - `testComputeIfAbsentBasic` — Verifies correct LRU-2 semantics: first call 
goes to pre-cache, second promotes, third returns from main cache without 
calling the function
   - `testComputeIfAbsentConcurrency` — 10 threads × 1000 iterations calling 
`computeIfAbsent` concurrently, verifying no errors or corruption
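   
   A self-contained sketch of what such a concurrency harness might look like
(hypothetical names; a plain `HashMap` guarded by a `ReentrantLock` stands in
for `LRU2Cache`, using the same hold-the-lock-across-the-operation discipline
as the fix):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.concurrent.CountDownLatch;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.atomic.AtomicInteger;
   import java.util.concurrent.locks.ReentrantLock;
   import java.util.function.Function;
   
   public class ConcurrencyHarness {
       static final ReentrantLock lock = new ReentrantLock();
       static final Map<String, String> cache = new HashMap<>();
       static final AtomicInteger applies = new AtomicInteger();
   
       // Same locking discipline as the fix: get, compute, and put under one lock span.
       static String computeIfAbsent(String key, Function<String, String> fn) {
           lock.lock();
           try {
               String v = cache.get(key);
               if (v == null) {
                   v = fn.apply(key);
                   cache.put(key, v);
               }
               return v;
           } finally {
               lock.unlock();
           }
       }
   
       public static void main(String[] args) throws Exception {
           ExecutorService pool = Executors.newFixedThreadPool(10);
           CountDownLatch done = new CountDownLatch(10);
           for (int t = 0; t < 10; t++) {
               pool.execute(() -> {
                   for (int i = 0; i < 1000; i++) {
                       computeIfAbsent("Header-" + (i % 16), k -> {
                           applies.incrementAndGet();
                           return k.toLowerCase();
                       });
                   }
                   done.countDown();
               });
           }
           done.await();
           pool.shutdown();
           // With the lock held across the whole operation, each of the 16
           // distinct keys is computed exactly once, however threads interleave.
           System.out.println("distinct computes: " + applies.get()); // prints 16
       }
   }
   ```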
   
   All 6 tests in `LRU2CacheTest` pass.
   
   ## Impact
   
   - Minimal change: 1 method rewritten in 1 file
   - Preserves existing LRU-2 semantics
   - Fixes thread safety in Triple protocol header processing hot path


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

