acsant commented on code in PR #15428:
URL: https://github.com/apache/lucene/pull/15428#discussion_r2535284878
##########
lucene/core/src/java/org/apache/lucene/index/StandardDirectoryReader.java:
##########
@@ -88,10 +95,37 @@ protected DirectoryReader doBody(String segmentFileName) throws IOException {
           SegmentInfos.readCommit(directory, segmentFileName, minSupportedMajorVersion);
       final SegmentReader[] readers = new SegmentReader[sis.size()];
       try {
-        for (int i = sis.size() - 1; i >= 0; i--) {
-          readers[i] =
-              new SegmentReader(
-                  sis.info(i), sis.getIndexCreatedVersionMajor(), IOContext.DEFAULT);
+        if (executor != null) {
+          List<Future<SegmentReader>> futures = new ArrayList<>();
+          for (int i = sis.size() - 1; i >= 0; i--) {
+            final int index = i;
+            // parallelize segment reader initialization
+            futures.add(
+                (executor)
+                    .submit(
Review Comment:
Have you considered using the existing `TaskExecutor` pattern instead, implementing this as follows:
```
TaskExecutor taskExecutor = new TaskExecutor(executorService);
List<Callable<SegmentReader>> tasks = new ArrayList<>(sis.size());
for (int i = 0; i < sis.size(); i++) {
  final int index = i;
  tasks.add(() -> new SegmentReader(
      sis.info(index), sis.getIndexCreatedVersionMajor(), IOContext.DEFAULT));
}
```
Based on the comments in that class, there are some optimizations we could inherit here:
> // try to execute as many tasks as possible on the current thread to minimize context
> // switching in case of long running concurrent
> // tasks as well as dead-locking if the current thread is part of #executor for executors that
> // have limited or no parallelism
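
For illustration, here is a rough sketch of how the rest of that pattern could look, assuming the `readers` array from the diff above and that `TaskExecutor#invokeAll` returns results in task order; the surrounding variable names (`executorService`, `readers`) are placeholders from the snippets, not part of the actual change:
```
// sketch only: executorService and readers are assumed from the surrounding code
TaskExecutor taskExecutor = new TaskExecutor(executorService);
List<Callable<SegmentReader>> tasks = new ArrayList<>(sis.size());
for (int i = 0; i < sis.size(); i++) {
  final int index = i;
  tasks.add(() -> new SegmentReader(
      sis.info(index), sis.getIndexCreatedVersionMajor(), IOContext.DEFAULT));
}
// invokeAll runs the callables (reusing the caller thread where possible)
// and returns the results in task order, mapping directly onto readers
List<SegmentReader> segmentReaders = taskExecutor.invokeAll(tasks);
for (int i = 0; i < segmentReaders.size(); i++) {
  readers[i] = segmentReaders.get(i);
}
```
That would also avoid managing `Future` objects and exception unwrapping by hand.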