chia7712 commented on code in PR #17013:
URL: https://github.com/apache/kafka/pull/17013#discussion_r1734312106
##########
core/src/main/scala/kafka/server/KafkaApis.scala:
##########
@@ -3808,8 +3808,14 @@ class KafkaApis(val requestChannel: RequestChannel,
)
}
+  private def groupVersion(): GroupVersion = {
+    GroupVersion.fromFeatureLevel(metadataCache.features.finalizedFeatures.getOrDefault(GroupVersion.FEATURE_NAME, 0.toShort))
+  }
+
private def isConsumerGroupProtocolEnabled(): Boolean = {
-    groupCoordinator.isNewGroupCoordinator && config.groupCoordinatorRebalanceProtocols.contains(Group.GroupType.CONSUMER)
+    groupCoordinator.isNewGroupCoordinator &&
Review Comment:
There are 3 conditions gating the consumer group protocol:
1. `groupCoordinator.isNewGroupCoordinator`: this will be removed once the old group coordinator is dropped
2. `config.groupCoordinatorRebalanceProtocols.contains(Group.GroupType.CONSUMER)`: this is a static config
3. `groupVersion().isConsumerRebalanceProtocolSupported`: this is dynamic (changed by updating the feature)

I'm not sure why we need both condition 2 and condition 3 to control support for consumer group requests. Or will `groupCoordinatorRebalanceProtocols` be removed in the future too?
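To make the interplay of the three gates concrete, here is a minimal, hypothetical Java model of the check (the real logic is the Scala `isConsumerGroupProtocolEnabled` in `KafkaApis`; the class name, constructor, and the `"consumer"` string below are illustrative assumptions, not Kafka API):

```java
import java.util.Set;

// Hypothetical standalone model of the three conditions discussed above.
public class ConsumerProtocolGate {
    // condition 1: transitional flag, goes away when the old coordinator is dropped
    private final boolean isNewGroupCoordinator;
    // condition 2: static broker config (rebalance protocols enabled at startup)
    private final Set<String> staticProtocols;
    // condition 3: dynamic group.version feature level
    private final boolean consumerRebalanceProtocolSupported;

    public ConsumerProtocolGate(boolean isNewGroupCoordinator,
                                Set<String> staticProtocols,
                                boolean consumerRebalanceProtocolSupported) {
        this.isNewGroupCoordinator = isNewGroupCoordinator;
        this.staticProtocols = staticProtocols;
        this.consumerRebalanceProtocolSupported = consumerRebalanceProtocolSupported;
    }

    // All three must hold; a static opt-in alone is not enough once the protocol
    // is also feature-gated, which is exactly what the comment questions.
    public boolean isConsumerGroupProtocolEnabled() {
        return isNewGroupCoordinator
            && staticProtocols.contains("consumer")
            && consumerRebalanceProtocolSupported;
    }
}
```

The model makes the redundancy visible: flipping either the static set (condition 2) or the dynamic feature flag (condition 3) alone is enough to disable the protocol.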
##########
core/src/test/java/kafka/test/junit/RaftClusterInvocationContext.java:
##########
@@ -246,21 +247,48 @@ public Map<Integer, ControllerServer> controllers() {
public void format() throws Exception {
if (formated.compareAndSet(false, true)) {
- List<ApiMessageAndVersion> records = new ArrayList<>();
- records.add(
- new ApiMessageAndVersion(new FeatureLevelRecord().
- setName(MetadataVersion.FEATURE_NAME).
-                    setFeatureLevel(clusterConfig.metadataVersion().featureLevel()), (short) 0));
-
- clusterConfig.features().forEach((feature, version) -> {
- records.add(
- new ApiMessageAndVersion(new FeatureLevelRecord().
- setName(feature.featureName()).
- setFeatureLevel(version), (short) 0));
+ Map<String, Features> nameToSupportedFeature = new TreeMap<>();
+            Features.PRODUCTION_FEATURES.forEach(feature -> nameToSupportedFeature.put(feature.featureName(), feature));
+ Map<String, Short> newFeatureLevels = new TreeMap<>();
+
+ // Verify that all specified features are known to us.
+            for (Map.Entry<Features, Short> entry : clusterConfig.features().entrySet()) {
+ String featureName = entry.getKey().featureName();
+ short level = entry.getValue();
+ if (!featureName.equals(MetadataVersion.FEATURE_NAME)) {
+ if (!nameToSupportedFeature.containsKey(featureName)) {
+                        throw new FormatterException("Unsupported feature: " + featureName +
Review Comment:
Could you please use `String.join` instead?
```java
throw new FormatterException("Unsupported feature: " + featureName +
    ". Supported features are: " + String.join(", ", nameToSupportedFeature.keySet()));
```
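Since `nameToSupportedFeature` is a `TreeMap`, `String.join` over its key set produces a deterministic, alphabetically sorted message. A small self-contained sketch of just that formatting (the helper name and the feature names in the test are illustrative, not the actual error path):

```java
import java.util.TreeMap;

public class JoinDemo {
    // TreeMap's keySet iterates in sorted key order, so the joined
    // message is stable across runs, which keeps error output predictable.
    public static String supportedFeaturesMessage(TreeMap<String, ?> nameToSupportedFeature) {
        return "Supported features are: " + String.join(", ", nameToSupportedFeature.keySet());
    }
}
```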
##########
core/src/test/java/kafka/test/junit/RaftClusterInvocationContext.java:
##########
@@ -246,21 +247,48 @@ public Map<Integer, ControllerServer> controllers() {
public void format() throws Exception {
if (formated.compareAndSet(false, true)) {
- List<ApiMessageAndVersion> records = new ArrayList<>();
- records.add(
- new ApiMessageAndVersion(new FeatureLevelRecord().
- setName(MetadataVersion.FEATURE_NAME).
-                    setFeatureLevel(clusterConfig.metadataVersion().featureLevel()), (short) 0));
-
- clusterConfig.features().forEach((feature, version) -> {
- records.add(
- new ApiMessageAndVersion(new FeatureLevelRecord().
- setName(feature.featureName()).
- setFeatureLevel(version), (short) 0));
+ Map<String, Features> nameToSupportedFeature = new TreeMap<>();
Review Comment:
There is another PR (#16991) adding similar behavior, but it adds the code to `BootstrapMetadata`. It seems moving this code to `BootstrapMetadata` would make it easier to write unit tests (`BootstrapMetadataTests`). WDYT?
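The testability argument can be sketched by hoisting the validation loop into a plain static helper, which is what moving it to `BootstrapMetadata` would enable. This is a hypothetical extraction, not the actual Kafka code: the class name, method name, and the use of `RuntimeException` in place of Kafka's `FormatterException` are all assumptions for a self-contained example.

```java
import java.util.Map;

// Hypothetical extraction of the unknown-feature check so it can be
// unit-tested without spinning up a Raft cluster invocation context.
public class FeatureValidator {
    public static void verifyKnownFeatures(Map<String, Short> requested,
                                           Map<String, ?> supported) {
        for (String featureName : requested.keySet()) {
            if (!supported.containsKey(featureName)) {
                // The real code throws FormatterException; RuntimeException
                // stands in here to keep the sketch dependency-free.
                throw new RuntimeException("Unsupported feature: " + featureName +
                    ". Supported features are: " + String.join(", ", supported.keySet()));
            }
        }
    }
}
```

A unit test then only needs two maps, no cluster formatting, which is the point of the suggestion.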
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]