[
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17077642#comment-17077642
]
Fernando Grijalba commented on KAFKA-1194:
------------------------------------------
I'm experiencing this issue in version 2.4.1.
{code:java}
[2020-04-05 17:49:18,878] ERROR Error while deleting segments for topic-name in dir C:\Kafka\data\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: C:\Kafka\data\kafka-logs\topic-name\00000000000000000000.timeindex -> C:\Kafka\data\kafka-logs\topic-name\00000000000000000000.timeindex.deleted: The process cannot access the file because it is being used by another process.
    at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
    at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
    at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:395)
    at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
    at java.base/java.nio.file.Files.move(Files.java:1425)
    at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:795)
    at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:209)
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:497)
    at kafka.log.Log.$anonfun$deleteSegmentFiles$1(Log.scala:2225)
    at kafka.log.Log.$anonfun$deleteSegmentFiles$1$adapted(Log.scala:2225)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.Log.deleteSegmentFiles(Log.scala:2225)
    at kafka.log.Log.removeAndDeleteSegments(Log.scala:2210)
    at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1722)
    at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:23)
    at kafka.log.Log.maybeHandleIOException(Log.scala:2335)
    at kafka.log.Log.deleteSegments(Log.scala:1713)
    at kafka.log.Log.deleteOldSegments(Log.scala:1708)
    at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1785)
    at kafka.log.Log.deleteOldSegments(Log.scala:1775)
    at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:982)
    at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:979)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.LogManager.cleanupLogs(LogManager.scala:979)
    at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:403)
    at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:116)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:65)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:830)
    Suppressed: java.nio.file.FileSystemException: C:\Kafka\data\kafka-logs\topic-name\00000000000000000000.timeindex -> C:\Kafka\data\kafka-logs\topic-name\00000000000000000000.timeindex.deleted: The process cannot access the file because it is being used by another process.
        at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
        at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
        at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:309)
        at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:292)
        at java.base/java.nio.file.Files.move(Files.java:1425)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:792)
        ... 27 more
{code}
Any updates on when a fix is expected? Is there a known workaround for version 2.4.1?
Thank you,
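For anyone trying to see the failure outside a broker, here is a minimal stand-alone sketch (hypothetical, not taken from Kafka's source; the class and file names are made up) of the step that fails in the trace above: the segment's index file is memory-mapped, and on Windows the rename to *.deleted is rejected while that mapping is still reachable.
{code:java}
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.FileSystemException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Stand-alone illustration (not Kafka code) of the failing step: rename a file
// to "<name>.deleted" while a MappedByteBuffer still maps it.
public class MappedRenameRepro {
    public static void main(String[] args) throws Exception {
        Path index = Files.createTempFile("00000000000000000000", ".timeindex");
        Path deleted = index.resolveSibling(index.getFileName() + ".deleted");

        MappedByteBuffer mmap;
        try (FileChannel ch = FileChannel.open(index,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Mapping the region grows the file to 4096 bytes and takes an
            // OS-level handle that outlives the FileChannel itself.
            mmap = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        }

        try {
            // Roughly the move that Utils.atomicMoveWithFallback attempts first.
            Files.move(index, deleted, StandardCopyOption.ATOMIC_MOVE);
            System.out.println("rename succeeded: " + deleted);
        } catch (FileSystemException e) {
            // Expected on Windows: the live mapping keeps the file "in use".
            System.out.println("rename failed while mapped: " + e.getMessage());
        }

        // Keep a strong reference so the mapping cannot be garbage-collected
        // before the move is attempted.
        System.out.println("mapping still referenced: " + (mmap != null));
    }
}
{code}
On Linux the same move succeeds because the filesystem allows renaming a file that is still mapped, which is why the problem only shows up on Windows.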
> The kafka broker cannot delete the old log files after the configured time
> --------------------------------------------------------------------------
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
> Issue Type: Bug
> Components: log
> Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0
> Environment: Windows
> Reporter: Tao Qin
> Priority: Critical
> Labels: features, patch, windows
> Attachments: KAFKA-1194.patch, RetentionExpiredWindows.txt,
> Untitled.jpg, image-2018-09-12-14-25-52-632.png,
> image-2018-11-26-10-18-59-381.png, kafka-1194-v1.patch, kafka-1194-v2.patch,
> kafka-bombarder.7z, screenshot-1.png
>
> Original Estimate: 72h
> Remaining Estimate: 72h
>
> We tested it in a Windows environment and set log.retention.hours to 24 hours:
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still cannot delete the old log files, and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 1516723
> at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
> at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
> at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
> at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
> at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
> at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
> at scala.collection.immutable.List.foreach(List.scala:76)
> at kafka.log.Log.deleteOldSegments(Log.scala:418)
> at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
> at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
> at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
> at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
> at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
> at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
> at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
> at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
> at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while it is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The Javadoc describes it as follows: "A mapped byte buffer and the file mapping that it represents remain valid until the buffer itself is garbage-collected."
> Fortunately, I found a forceUnmap function in the Kafka code, and perhaps it can be used to free the MappedByteBuffer.
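> A rough sketch of that idea (illustrative only, not the actual forceUnmap implementation; it assumes JDK 9+, where sun.misc.Unsafe.invokeCleaner is available): release the mapping explicitly before the rename instead of waiting for garbage collection.
> {code:java}
> import java.lang.reflect.Field;
> import java.nio.MappedByteBuffer;
>
> // Illustrative helper: explicitly release a MappedByteBuffer's file mapping so
> // the underlying file can be renamed or deleted on Windows, instead of waiting
> // for the buffer to be garbage-collected.
> public final class MappedBuffers {
>     // JDK 9+ route via sun.misc.Unsafe; the buffer must never be touched again
>     // after this call, or the JVM may crash.
>     public static void unmap(MappedByteBuffer buffer) throws ReflectiveOperationException {
>         Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
>         f.setAccessible(true);
>         sun.misc.Unsafe unsafe = (sun.misc.Unsafe) f.get(null);
>         unsafe.invokeCleaner(buffer);
>     }
> }
> {code}
> Calling something like MappedBuffers.unmap(...) on the index's buffer right before the rename lets the move go through on Windows; since an unmapped buffer must not be used afterwards, it has to be the very last access.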
--
This message was sent by Atlassian Jira
(v8.3.4#803005)