Prajwal-banakar opened a new pull request, #2907:
URL: https://github.com/apache/fluss/pull/2907

   
   
   <!--
   *Thank you very much for contributing to Fluss - we are happy that you want 
to help us improve Fluss. To help the community review your contribution in the 
best possible way, please go through the checklist below, which will get the 
contribution into a shape in which it can be best reviewed.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [GitHub 
issue](https://github.com/apache/fluss/issues). Exceptions are made for typos 
in JavaDoc or documentation files, which need no issue.
   
     - Name the pull request in the format "[component] Title of the pull 
request", where *[component]* should be replaced by the name of the component 
being changed. Typically, this corresponds to the component label assigned to 
the issue (e.g., [kv], [log], [client], [flink]). Skip *[component]* if you are 
unsure about which is the best component.
   
     - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
   
     - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes.
   
     - Each pull request should address only one issue, not mix up code from 
multiple issues.
   
   
   **(The sections below can be removed for hotfixes or typos)**
   -->
   
   ### Purpose
   
   <!-- Linking this pull request to the issue -->
   Linked issue: close #2874
   
   <!-- What is the purpose of the change -->
   When `lookup.insert-if-not-exists=true` and `lookup.async=true`, multiple 
lookups for the same key can land in the same `LookupBatch` within a single 
drain cycle. The server acquires a row-level latch per key during insert, so 
two entries for the same key in one RPC batch deadlock the server-side 
handler. The `LookupQuery` futures are never completed, and the 
`AsyncWaitOperator` hangs until its 60s timeout fires.
   
   This did not affect `async=false` because sync lookups execute sequentially 
via `.get()`, so the same key never appears twice in one drain cycle.
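   The failure mode can be reproduced in miniature: if a handler acquires a non-reentrant per-key latch once per batch entry, a batch carrying the same key twice blocks on the second acquire. This is an illustrative sketch only, not Fluss's actual latch implementation:

   ```java
   import java.util.concurrent.Semaphore;
   import java.util.concurrent.TimeUnit;

   // Illustrative sketch (not Fluss server code): a non-reentrant per-key
   // latch acquired once per batch entry cannot be acquired a second time
   // when one batch carries the same key twice.
   public class PerKeyLatchSketch {
       public static void main(String[] args) throws InterruptedException {
           Semaphore keyLatch = new Semaphore(1); // latch guarding one key
           byte[][] batch = {{1}, {1}};           // same key twice in one batch

           for (byte[] key : batch) {
               // The second acquire for the duplicate key times out:
               // the first entry still holds the latch.
               boolean acquired = keyLatch.tryAcquire(100, TimeUnit.MILLISECONDS);
               System.out.println("acquired=" + acquired);
           }
       }
   }
   ```

   In the real handler there is no timeout, so the second acquire blocks forever and the batch's futures are never completed.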
   
   ### Brief change log
   
   <!-- Please describe the changes made in this pull request and explain how 
they address the issue -->
   - In `LookupSender.sendLookupRequest`, when `insertIfNotExists=true`, 
duplicate lookups for the same key bytes within a batch are detected via 
`ByteBuffer` key comparison. A duplicate's future is chained to the first 
lookup's future instead of being added to the RPC batch, so only one 
server-side entry is sent per unique key per drain cycle.

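   The dedup-and-chain idea can be sketched as follows. The class and method names here are hypothetical, chosen for illustration rather than taken from the actual `LookupSender` code:

   ```java
   import java.nio.ByteBuffer;
   import java.util.HashMap;
   import java.util.Map;
   import java.util.concurrent.CompletableFuture;

   // Hypothetical sketch of the fix (names are illustrative): within one
   // drain cycle, remember the first future seen per key and chain later
   // duplicates onto it instead of adding them to the RPC batch.
   public class LookupDedupSketch {
       private final Map<ByteBuffer, CompletableFuture<byte[]>> inFlight = new HashMap<>();

       /**
        * Returns true if the lookup should be added to the RPC batch, false
        * if it is a duplicate whose future was chained to the first lookup.
        */
       public boolean registerOrChain(byte[] keyBytes, CompletableFuture<byte[]> future) {
           // ByteBuffer.wrap gives content-based equals/hashCode over the key bytes.
           ByteBuffer key = ByteBuffer.wrap(keyBytes);
           CompletableFuture<byte[]> first = inFlight.putIfAbsent(key, future);
           if (first == null) {
               return true; // first occurrence: send it to the server
           }
           // Duplicate: complete this future from the first lookup's result.
           first.whenComplete((value, error) -> {
               if (error != null) {
                   future.completeExceptionally(error);
               } else {
                   future.complete(value);
               }
           });
           return false;
       }

       /** Called after the batch is drained so the next cycle starts fresh. */
       public void reset() {
           inFlight.clear();
       }
   }
   ```
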
   ### Tests
   
   <!-- List UT and IT cases to verify this change -->
   - `Flink119TableSourceITCase#testLookupInsertIfNotExists` (existing test, 
previously flaky with `async=true`, now passes consistently with 
`-Dsurefire.rerunFailingTestsCount=5`)

   ### API and Format
   
   <!-- Does this change affect API or storage format -->
   No API or storage format changes.

   ### Documentation
   
   <!-- Does this change introduce a new feature -->
   No new feature introduced. Bug fix only.

