[ https://issues.apache.org/jira/browse/KAFKA-17184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Francois Visconte updated KAFKA-17184:
--------------------------------------
Description:
We have a tiered storage cluster where some consumers are constantly lagging
behind.
On this cluster, we get a large volume of error logs and failed fetches with the
following symptom:
{code:java}
java.lang.IllegalStateException: This entry is marked for cleanup
    at org.apache.kafka.storage.internals.log.RemoteIndexCache$Entry.lookupOffset(RemoteIndexCache.java:569)
    at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
    at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1445)
    at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1391)
    at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
    at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
{code}
I believe this should be handled differently, by either:
* lowering the log level to warn or info, or
* reloading the index entry synchronously when an offset is requested and the
entry is marked for cleanup.
We use the default value for
{{remote.log.index.file.cache.total.size.bytes}} (1 GiB).
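The second option (a synchronous reload) could be sketched roughly as below. This is a hypothetical illustration, not the actual RemoteIndexCache code: the {{Entry}} class, the cache field, and the method names are stand-ins, and the only behavior borrowed from the report is the IllegalStateException thrown when an entry is marked for cleanup.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a cache entry that can be marked for cleanup, and a
// lookup wrapper that reloads the entry instead of failing the fetch.
class ReloadOnCleanupSketch {
    static class Entry {
        final AtomicBoolean markedForCleanup = new AtomicBoolean(false);
        final long baseOffset;
        Entry(long baseOffset) { this.baseOffset = baseOffset; }
        long lookupOffset(long target) {
            if (markedForCleanup.get()) {
                // Mirrors the exception seen in the report.
                throw new IllegalStateException("This entry is marked for cleanup");
            }
            return target - baseOffset; // stand-in for a real index lookup
        }
    }

    final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    // If the entry was evicted mid-lookup, drop it and retry once with a
    // freshly loaded entry rather than propagating the error to the caller.
    long lookupOffsetWithReload(String segment, long target) {
        for (int attempt = 0; attempt < 2; attempt++) {
            Entry e = cache.computeIfAbsent(segment, s -> new Entry(0L));
            try {
                return e.lookupOffset(target);
            } catch (IllegalStateException ex) {
                cache.remove(segment, e); // remove the stale entry, then retry
            }
        }
        throw new IllegalStateException("entry still marked for cleanup after reload");
    }

    public static void main(String[] args) {
        ReloadOnCleanupSketch c = new ReloadOnCleanupSketch();
        Entry stale = c.cache.computeIfAbsent("seg-0", s -> new Entry(0L));
        stale.markedForCleanup.set(true); // simulate a concurrent eviction
        System.out.println(c.lookupOffsetWithReload("seg-0", 42L)); // prints 42
    }
}
```

The point of the sketch is only that a lookup racing with cache eviction can recover transparently instead of surfacing an error on the consumer fetch path.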
was:
We have a tiered storage cluster where some consumers are constantly lagging
behind.
On this cluster, we get a ton of error logs and fail fetches with the following
symptom:
{code:java}
java.lang.IllegalStateException: This entry is marked for cleanup
    at org.apache.kafka.storage.internals.log.RemoteIndexCache$Entry.lookupOffset(RemoteIndexCache.java:569)
    at org.apache.kafka.storage.internals.log.RemoteIndexCache.lookupOffset(RemoteIndexCache.java:446)
    at kafka.log.remote.RemoteLogManager.lookupPositionForOffset(RemoteLogManager.java:1445)
    at kafka.log.remote.RemoteLogManager.read(RemoteLogManager.java:1391)
    at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:62)
    at kafka.log.remote.RemoteLogReader.call(RemoteLogReader.java:31)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
{code}
I believe this should be handled differently:
* Log should be warn or info
* We should reload the index when an offset is requested and the entry is
marked for cleanup.
We do use the default setting for
{{remote.log.index.file.cache.total.size.bytes}} (1GiB).
> Remote index cache noisy logging
> --------------------------------
>
> Key: KAFKA-17184
> URL: https://issues.apache.org/jira/browse/KAFKA-17184
> Project: Kafka
> Issue Type: Bug
> Reporter: Francois Visconte
> Priority: Critical
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)