[ https://issues.apache.org/jira/browse/KAFKA-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16813228#comment-16813228 ]

Jonathan Santilli commented on KAFKA-7656:
------------------------------------------

Enabling the TRACE level shows a lot. I can share the lines immediately before and 
after the error on the leader broker for partition *__consumer_offsets-14*:

{code:java}
[2019-04-09 11:05:26,877] TRACE [ReplicaManager broker=1] Fetching log segment 
for partition __consumer_offsets-14, offset 0, partition fetch size 
-2147483648, remaining response limit 2147483647 (kafka.server.ReplicaManager)
[2019-04-09 11:05:26,877] ERROR [ReplicaManager broker=1] Error processing 
fetch with max size -2147483648 from consumer on partition 
__consumer_offsets-14: (fetchOffset=0, logStartOffset=-1, maxBytes=-2147483648, 
currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager)
java.lang.IllegalArgumentException: Invalid max size -2147483648 for log read 
from segment FileRecords(file= 
/opt/kafka/logdata/__consumer_offsets-14/00000000000000000000.log, start=0, 
end=2147483647)
 at kafka.log.LogSegment.read(LogSegment.scala:274)
 at kafka.log.Log.$anonfun$read$2(Log.scala:1245)
 at kafka.log.Log.maybeHandleIOException(Log.scala:2013)
 at kafka.log.Log.read(Log.scala:1200)
 at kafka.cluster.Partition.$anonfun$readRecords$1(Partition.scala:805)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
 at kafka.cluster.Partition.readRecords(Partition.scala:781)
 at kafka.server.ReplicaManager.read$1(ReplicaManager.scala:926)
 at 
kafka.server.ReplicaManager.$anonfun$readFromLocalLog$4(ReplicaManager.scala:991)
 at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
 at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
 at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:990)
 at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:840)
 at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:845)
 at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:723)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:109)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
 at java.lang.Thread.run(Thread.java:748)
[2019-04-09 11:05:26,879] TRACE [ReplicaManager broker=1] Fetching log segment 
for partition __consumer_offsets-26, offset 0, partition fetch size 1048576, 
remaining response limit 2147483647 (kafka.server.ReplicaManager){code}
I see the log says *offset 0*, but the latest offset at that moment was 
*53,858,848*. Maybe it is not related, but it seems worth mentioning.

The leader is now *broker=3*, and it is showing the same exception.
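As a side note on the failing value itself: the max size in the error, -2147483648, is exactly Integer.MIN_VALUE, which is the signature of a signed 32-bit overflow (as the issue title suggests). A minimal, hypothetical sketch of how such a value can appear, assuming a 2 GiB (2^31) byte count is narrowed from long to int somewhere in the fetch path:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Hypothetical illustration, not Kafka's actual code path:
        // 2^31 bytes does not fit in a signed 32-bit int, so narrowing
        // from long wraps it to Integer.MIN_VALUE (-2147483648), the
        // exact "Invalid max size" value reported in the log above.
        long requested = 2147483648L;     // 2^31 bytes
        int truncated = (int) requested;  // wraps to -2147483648
        System.out.println(truncated);    // prints -2147483648
    }
}
```

Any arithmetic that lands on or past 2^31 before the narrowing conversion would produce the same negative value, which LogSegment.read then rejects with the IllegalArgumentException seen in the stack trace.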

 

> ReplicaManager fetch fails on leader due to long/integer overflow
> -----------------------------------------------------------------
>
>                 Key: KAFKA-7656
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7656
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.0.1
>         Environment: Linux 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 
> EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Patrick Haas
>            Assignee: Jose Armando Garcia Sancio
>            Priority: Major
>
> (Note: From 2.0.1-cp1 from confluent distribution)
> {{[2018-11-19 21:13:13,687] ERROR [ReplicaManager broker=103] Error 
> processing fetch operation on partition __consumer_offsets-20, offset 0 
> (kafka.server.ReplicaManager)}}
> {{java.lang.IllegalArgumentException: Invalid max size -2147483648 for log 
> read from segment FileRecords(file= 
> /prod/kafka/data/kafka-logs/__consumer_offsets-20/00000000000000000000.log, 
> start=0, end=2147483647)}}
> {{ at kafka.log.LogSegment.read(LogSegment.scala:274)}}
> {{ at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159)}}
> {{ at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114)}}
> {{ at kafka.log.Log.maybeHandleIOException(Log.scala:1842)}}
> {{ at kafka.log.Log.read(Log.scala:1114)}}
> {{ at 
> kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912)}}
> {{ at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974)}}
> {{ at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973)}}
> {{ at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)}}
> {{ at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)}}
> {{ at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973)}}
> {{ at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802)}}
> {{ at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815)}}
> {{ at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:685)}}
> {{ at kafka.server.KafkaApis.handle(KafkaApis.scala:114)}}
> {{ at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)}}
> {{ at java.lang.Thread.run(Thread.java:748)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
