loicgreffier opened a new pull request, #16684:
URL: https://github.com/apache/kafka/pull/16684

   @cadonna @mjsax 
   
   After https://github.com/apache/kafka/pull/16093 was merged, there is a 
scenario where processing exception handling ends with a `NullPointerException`:
   
   ```java
   Caused by: java.lang.NullPointerException: Cannot invoke 
"org.apache.kafka.clients.consumer.ConsumerRecord.key()" because the return 
value of 
"org.apache.kafka.streams.processor.internals.ProcessorRecordContext.rawRecord()"
 is null
        at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:212)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:292)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:271)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:229)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$6(MeteredWindowStore.java:190)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:125)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:100)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:159) 
~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:117) 
~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:148)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.CachingWindowStore.flushCache(CachingWindowStore.java:426)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.state.internals.WrappedStateStore.flushCache(WrappedStateStore.java:87)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
        at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flushCache(ProcessorStateManager.java:537)
 ~[kafka-streams-3.9.0-SNAPSHOT.jar:na]
   ```
   
   This happened with the following topology:
   
   ```java
   builder
     .stream(...)
     .groupByKey()
     .windowedBy(...) // Does not really matter; the NPE is thrown with or without windowing
     .aggregate(...)
     .mapValues(value -> { throw new RuntimeException(...); });
   ```
   
   The raw record, which was added to `ProcessorRecordContext`, is lost when 
the record goes through a caching store. This PR fixes it. I am looking forward 
to providing unit tests.
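
   To illustrate the failure mode, here is a minimal toy model, not Kafka's actual internals (all class and method names below are hypothetical): a cache flush that rebuilds the record context without copying the raw record leaves `rawRecord` as `null`, so any later access to the failed record's key throws the same kind of NPE as in the stack trace above, while a flush that carries the raw record forward does not.

   ```java
   import java.util.Objects;

   // Toy stand-in for the raw consumer record (hypothetical, not Kafka's class).
   final class RawRecord {
       final String key;
       final String value;
       RawRecord(String key, String value) { this.key = key; this.value = value; }
   }

   // Toy stand-in for ProcessorRecordContext: may or may not carry the raw record.
   final class RecordContext {
       final long timestamp;
       final RawRecord rawRecord; // null if the flush did not copy it
       RecordContext(long timestamp, RawRecord rawRecord) {
           this.timestamp = timestamp;
           this.rawRecord = rawRecord;
       }
   }

   public class CacheFlushSketch {
       // Buggy behavior: flushing rebuilds the context but drops the raw record.
       static RecordContext flushDroppingRawRecord(RecordContext ctx) {
           return new RecordContext(ctx.timestamp, null);
       }

       // Fixed behavior: the raw record is carried through the flush.
       static RecordContext flushKeepingRawRecord(RecordContext ctx) {
           return new RecordContext(ctx.timestamp, ctx.rawRecord);
       }

       // Stand-in for the exception-handling path reading the failed record's
       // key, mirroring the rawRecord().key() call at the NPE site.
       static String failedKey(RecordContext ctx) {
           return Objects.requireNonNull(ctx.rawRecord, "rawRecord is null").key;
       }

       public static void main(String[] args) {
           RecordContext ctx = new RecordContext(42L, new RawRecord("k1", "v1"));
           System.out.println(failedKey(flushKeepingRawRecord(ctx))); // prints k1
           try {
               failedKey(flushDroppingRawRecord(ctx)); // NPE, as in the report
           } catch (NullPointerException e) {
               System.out.println("NPE: " + e.getMessage());
           }
       }
   }
   ```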


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
