abhijeetk88 commented on code in PR #15820:
URL: https://github.com/apache/kafka/pull/15820#discussion_r1628819130
##########
core/src/main/java/kafka/log/remote/RemoteLogManager.java:
##########
@@ -738,6 +750,23 @@ public void copyLogSegmentsToRemote(UnifiedLog log) throws InterruptedException
isCancelled(), isLeader());
return;
}
+
+ copyQuotaManagerLock.lock();
+ try {
+ while (rlmCopyQuotaManager.isQuotaExceeded()) {
Review Comment:
I agree that the threadpool will look busy while threads are waiting for
quota to become available. However, we will also have another metric for
throttling, which can be consulted to verify whether the copy operations are
being throttled.
If we see **threadpool busy and copies being throttled -> the copy quota is
fully utilized and there is nothing to do**.
However, if **the threadpool is busy and no throttling is happening -> there
are not enough threads in the pool and the threadpool size needs to be
increased**.
Skipping the current run (as Kamal pointed out) will result in ineffective
use of the quota, because the next run of the task happens only after 30 sec.
It may also cause starvation for some topic partitions: there is no fairness.
For example, the copy for one topic partition gets skipped, but another one is
picked up immediately because quota happens to be available at that moment.
These problems would continue to exist even if we scheduled the next run with a
1-sec delay.
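To illustrate the waiting approach, here is a minimal standalone sketch (hypothetical names, not the actual `RLMTask`/`rlmCopyQuotaManager` code) of blocking on a fair lock and condition until quota is available. With a fair `ReentrantLock`, waiting copy tasks proceed roughly in arrival order instead of being skipped, which is the fairness property discussed above:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: copy tasks block until quota is available rather
// than skipping their run. A fair lock hands the lock to waiters in
// arrival order, so no topic partition starves indefinitely.
public class QuotaWaitSketch {
    private final ReentrantLock lock = new ReentrantLock(true); // fair ordering
    private final Condition quotaAvailable = lock.newCondition();
    private boolean quotaExceeded = true;

    boolean isQuotaExceeded() {
        lock.lock();
        try {
            return quotaExceeded;
        } finally {
            lock.unlock();
        }
    }

    // Blocks the copy task until quota frees up, re-checking periodically.
    void awaitQuota() throws InterruptedException {
        lock.lock();
        try {
            while (quotaExceeded) {
                quotaAvailable.await(100, TimeUnit.MILLISECONDS);
            }
        } finally {
            lock.unlock();
        }
    }

    // Called when quota is replenished; wakes all waiting copy tasks.
    void releaseQuota() {
        lock.lock();
        try {
            quotaExceeded = false;
            quotaAvailable.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        QuotaWaitSketch q = new QuotaWaitSketch();
        Thread copier = new Thread(() -> {
            try {
                q.awaitQuota();
                System.out.println("copy proceeds");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        copier.start();
        Thread.sleep(200); // copier is now parked waiting for quota
        q.releaseQuota();
        copier.join();
    }
}
```

Note that while a task waits here it still occupies a pool thread, which is exactly why the throttling metric is needed to distinguish "quota exhausted" from "pool undersized".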
It is also not simple to schedule a new RLMTask with a 1-sec delay (if we
skip the current run). The schedule of the tasks is already fixed, because we
use a ScheduledThreadPoolExecutor with _scheduleWithFixedDelay_ to schedule
them. To make the above change, each task would need to own its own
scheduling, i.e., after each run it schedules its next run. That would be a
larger change.
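For reference, a minimal standalone sketch (hypothetical names, not the actual RLMTask code) of what task-owned scheduling would look like: instead of _scheduleWithFixedDelay_, the task re-submits itself after each run and can pick a shorter delay when it was throttled:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a task that owns its own scheduling: after each
// run it chooses the next delay (short if throttled, normal otherwise)
// and re-submits itself to the executor.
public class SelfSchedulingTask implements Runnable {
    private final ScheduledExecutorService executor;
    private final AtomicInteger runs = new AtomicInteger();

    SelfSchedulingTask(ScheduledExecutorService executor) {
        this.executor = executor;
    }

    // Stand-in for a real quota check such as isQuotaExceeded().
    private boolean wasThrottled() {
        return runs.get() % 2 == 0;
    }

    @Override
    public void run() {
        runs.incrementAndGet();
        // Retry sooner after a throttled run, otherwise use the normal delay.
        long nextDelayMs = wasThrottled() ? 10 : 30;
        if (runs.get() < 5) { // bounded here only so the sketch terminates
            executor.schedule(this, nextDelayMs, TimeUnit.MILLISECONDS);
        }
    }

    int runCount() {
        return runs.get();
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        SelfSchedulingTask task = new SelfSchedulingTask(executor);
        executor.schedule(task, 0, TimeUnit.MILLISECONDS);
        Thread.sleep(500);
        executor.shutdown();
        System.out.println("runs = " + task.runCount());
    }
}
```

The sketch shows why this is a larger change: cancellation, leadership changes, and error handling would all have to be woven into the task's own re-scheduling path instead of being handled once by the executor.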
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]