brandboat commented on code in PR #19803:
URL: https://github.com/apache/kafka/pull/19803#discussion_r2242505112


##########
metadata/src/main/java/org/apache/kafka/metadata/MetadataCache.java:
##########
@@ -148,56 +147,53 @@ DescribeTopicPartitionsResponseData describeTopicResponse(
         boolean ignoreTopicsWithExceptions);
 
     static Cluster toCluster(String clusterId, MetadataImage image) {
-        Map<Integer, List<Node>> brokerToNodes = new HashMap<>();
-        image.cluster().brokers().values().stream()
-            .filter(broker -> !broker.fenced())
-            .forEach(broker -> brokerToNodes.put(broker.id(), broker.nodes()));
+        Map<Integer, Node> nodesById = image.cluster().brokers().values().stream()

Review Comment:
   Hi @ahuang98, sorry for the late reply.
   
   The `Map<Integer, List<Node>>` holds one `Node` per endpoint for each broker. If a broker exposes multiple endpoints, we might send duplicated data to users. For example, if a broker has 2 endpoints and 1 topic-partition, `partitionInfos` will contain 2 entries, one per endpoint.
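   To illustrate the deduplication idea, here is a minimal, self-contained sketch using hypothetical stand-in records for Kafka's internal broker and node types (the real code operates on `MetadataImage`/`BrokerRegistration`; the record names and the choice of the first endpoint are assumptions for illustration):
   
   ```java
   import java.util.*;
   import java.util.stream.*;
   
   public class NodeDedupSketch {
       // Hypothetical stand-ins for Kafka's internal types, for illustration only.
       record Node(int id, String host, int port) {}
       record Broker(int id, boolean fenced, List<Node> nodes) {}
   
       // Keep one Node per broker id so each topic-partition maps to a single
       // PartitionInfo entry, instead of one entry per endpoint.
       static Map<Integer, Node> nodesById(Collection<Broker> brokers) {
           return brokers.stream()
               .filter(b -> !b.fenced())
               .filter(b -> !b.nodes().isEmpty())
               .collect(Collectors.toMap(Broker::id, b -> b.nodes().get(0)));
       }
   
       public static void main(String[] args) {
           // A broker with two endpoints would otherwise yield two entries.
           Broker broker = new Broker(1, false,
               List.of(new Node(1, "b1", 9092), new Node(1, "b1", 9093)));
           Map<Integer, Node> result = nodesById(List.of(broker));
           System.out.println(result.size()); // prints 1
       }
   }
   ```
   
   Picking a single endpoint per broker avoids the duplicated `partitionInfos` entries, at the cost of dropping the other listeners' addresses, which is the information loss discussed below.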
   
   That's why we deduplicate the nodes here. But as you mentioned, this also loses information. The root issue is that the current `Cluster` abstraction is no longer suitable. This API existed in ZooKeeper mode but not in KRaft; we reintroduced it in Kafka 4.0, and it came with these drawbacks.
   
   In ZooKeeper mode, we generated the `Cluster` object based on the listener used in the request, so the returned metadata was listener-aware. In KRaft, that logic doesn't exist: the current implementation doesn't distinguish which listener the request came from. This makes the `updateClusterMetadata` API inconsistent and in need of a redesign, and I proposed a KIP to address this: https://cwiki.apache.org/confluence/x/zInoF
   
   However, recent discussions suggest this method isn't actively used:
   https://lists.apache.org/thread/otc7d6z19h5d86o17qr72g23wkxffkvo
   That aligns with the fact that no one complained about its absence in KRaft. 
Given that, it may not be worth investing more effort into fixing it.
   
   To summarize: we plan to deprecate `updateClusterMetadata` and remove it in 
Kafka 5.0.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
