C0urante commented on code in PR #14372:
URL: https://github.com/apache/kafka/pull/14372#discussion_r1669161112


##########
connect/runtime/src/main/java/org/apache/kafka/connect/util/KafkaBasedLog.java:
##########
@@ -567,13 +567,12 @@ private class WorkThread extends Thread {
         public WorkThread() {
             super("KafkaBasedLog Work Thread - " + topic);
         }
-
         @Override
         public void run() {
-            try {
-                log.trace("{} started execution", this);
-                while (true) {
-                    int numCallbacks;
+            while (true) {
+                int numCallbacks = 0;
+                try {
+                    log.trace("{} started execution", this);

Review Comment:
   Shouldn't the `log.trace("{} started execution", this)` call stay outside the loop? Otherwise it will log on every iteration instead of once at thread startup.
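   
   As a rough sketch, the shape might look like this (a hypothetical standalone class, not the actual `WorkThread`; the slf4j logger and the loop body are placeholders):
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   class WorkThreadSketch extends Thread {
       private static final Logger log = LoggerFactory.getLogger(WorkThreadSketch.class);
   
       @Override
       public void run() {
           // Trace once, at thread startup; inside the loop this would fire on every iteration.
           log.trace("{} started execution", this);
           while (!isInterrupted()) {
               int numCallbacks = 0; // placeholder mirroring the diff above
               try {
                   // ... existing per-iteration work: poll records, invoke queued callbacks ...
               } catch (RuntimeException e) {
                   // ... existing per-iteration failure handling ...
                   log.error("Unexpected exception in {}", this, e);
               }
           }
       }
   }
   ```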



##########
connect/runtime/src/test/java/org/apache/kafka/connect/runtime/WorkerTest.java:
##########
@@ -2156,6 +2157,42 @@ public void testAlterOffsetsSourceConnector(boolean enableTopicCreation) throws
         verifyKafkaClusterId();
     }
 
+    @ParameterizedTest
+    @ValueSource(booleans = {true, false})
+    public void testGetSourceConnectorOffsetsFetchError(boolean enableTopicCreation) {
+        setup(enableTopicCreation);
+        mockKafkaClusterId();
+
+        ConnectorOffsetBackingStore offsetStore = mock(ConnectorOffsetBackingStore.class);
+        CloseableOffsetStorageReader offsetReader = mock(CloseableOffsetStorageReader.class);
+
+        Set<Map<String, Object>> connectorPartitions =
+            Collections.singleton(Collections.singletonMap("partitionKey", "partitionValue"));
+
+        when(executorService.submit(any(Runnable.class))).thenAnswer(invocation -> {
+            invocation.getArgument(0, Runnable.class).run();
+            return null;
+        });
+        worker = new Worker(WORKER_ID, new MockTime(), plugins, config, offsetBackingStore, executorService,
+            allConnectorClientConfigOverridePolicy, null);
+        worker.start();
+
+        when(offsetStore.connectorPartitions(CONNECTOR_ID)).thenReturn(connectorPartitions);
+        doAnswer(invocation -> {
+            throw new ExecutionException(new SaslAuthenticationException("error"));
+        }).when(offsetReader).offsets(connectorPartitions);

Review Comment:
   Isn't this impossible in the wild? `ExecutionException` is a checked 
exception, and it isn't included in the signature of 
[OffsetStorageReader::offsets](https://github.com/apache/kafka/blob/515cdbb707fc808f796085b309be81e3e27e4312/connect/api/src/main/java/org/apache/kafka/connect/storage/OffsetStorageReader.java#L60).
   
   Maybe you meant to wrap this in a `ConnectException`, which would match the 
logic in 
[OffsetStorageReaderImpl::offsets](https://github.com/apache/kafka/blob/515cdbb707fc808f796085b309be81e3e27e4312/connect/runtime/src/main/java/org/apache/kafka/connect/storage/OffsetStorageReaderImpl.java#L113-L116)?
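   
   The stubbing could then be a plain `thenThrow`, since `ConnectException` is unchecked. A sketch (the message string here is only illustrative):
   
   ```java
   when(offsetReader.offsets(connectorPartitions))
       .thenThrow(new ConnectException("Failed to fetch offsets.",
           new SaslAuthenticationException("error")));
   ```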



##########
connect/runtime/src/test/java/org/apache/kafka/connect/util/KafkaBasedLogTest.java:
##########
@@ -403,6 +404,54 @@ public void testGetOffsetsConsumerErrorOnReadToEnd() throws Exception {
         verifyStartAndStop();
     }
 
+    @Test
+    public void testOffsetReadFailureWhenWorkThreadFails() throws Exception {
+        RuntimeException exception = new RuntimeException();
+        Set<TopicPartition> tps = new HashSet<>(Arrays.asList(TP0, TP1));
+        Map<TopicPartition, Long> endOffsets = new HashMap<>();
+        endOffsets.put(TP0, 0L);
+        endOffsets.put(TP1, 0L);
+        admin = mock(TopicAdmin.class);
+        when(admin.endOffsets(eq(tps)))
+            .thenReturn(endOffsets)
+            .thenThrow(exception)
+            .thenReturn(endOffsets);
+
+        store.start();
+
+        AtomicInteger numSuccesses = new AtomicInteger();
+        AtomicInteger numFailures = new AtomicInteger();
+        AtomicReference<FutureCallback<Void>> finalSuccessCallbackRef = new AtomicReference<>();
+        final FutureCallback<Void> successCallback = new FutureCallback<>((error, result) -> numSuccesses.getAndIncrement());
+        final FutureCallback<Void> firstFailedCallback = new FutureCallback<>((error, result) -> {
+            numFailures.getAndIncrement();
+            // We issue another readToEnd call here to simulate the case that more read requests can come in while
+            // the failure is being handled in the WorkThread.
+            final FutureCallback<Void> finalSuccessCallback = new FutureCallback<>((e, r) -> numSuccesses.getAndIncrement());
+            finalSuccessCallbackRef.set(finalSuccessCallback);
+            store.readToEnd(finalSuccessCallback);
+        });
+        final FutureCallback<Void> subsequentFailedCallback = new FutureCallback<>((error, result) -> numFailures.getAndIncrement());
+
+        store.readToEnd(successCallback);
+        store.readToEnd(firstFailedCallback);
+        store.readToEnd(subsequentFailedCallback);

Review Comment:
   Isn't this at risk of being flaky? Is there anything that prevents multiple 
callbacks from being added in rapid succession and all handled in the same 
iteration of the `WorkThread` loop?
   
   Maybe instead of setting up all of our expectations for the topic admin in 
bulk and then issuing three consecutive calls to `readToEnd`, we could handle 
each case with its own expectation and call to `readToEnd` before moving on to 
the next?
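   
   Something like this, sketched with Mockito's `doReturn`/`doThrow` style (which allows re-stubbing a method currently stubbed to throw) and relying on `FutureCallback` implementing `Future<Void>`, so each call can be awaited before the next expectation is set up. The `retryCallback` name is hypothetical and the nested readToEnd from the failure handler is elided for brevity:
   
   ```java
   // First call: succeeds
   doReturn(endOffsets).when(admin).endOffsets(eq(tps));
   store.readToEnd(successCallback);
   successCallback.get(10, TimeUnit.SECONDS);
   
   // Second call: hits the admin failure
   doThrow(exception).when(admin).endOffsets(eq(tps));
   store.readToEnd(firstFailedCallback);
   assertThrows(ExecutionException.class,
       () -> firstFailedCallback.get(10, TimeUnit.SECONDS));
   
   // Third call: succeeds once the stub is restored
   doReturn(endOffsets).when(admin).endOffsets(eq(tps));
   FutureCallback<Void> retryCallback =
       new FutureCallback<>((e, r) -> numSuccesses.getAndIncrement());
   store.readToEnd(retryCallback);
   retryCallback.get(10, TimeUnit.SECONDS);
   ```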



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]