jpountz commented on code in PR #892:
URL: https://github.com/apache/lucene/pull/892#discussion_r886550379


##########
lucene/core/src/java/org/apache/lucene/codecs/lucene90/compressing/Lucene90CompressingStoredFieldsWriter.java:
##########
@@ -553,14 +554,20 @@ private void copyChunks(
     final long toPointer =
         toDocID == sub.maxDoc ? reader.getMaxPointer() : index.getStartPointer(toDocID);
     if (fromPointer < toPointer) {
-      if (numBufferedDocs > 0) {
-        flush(true);
-      }
       final IndexInput rawDocs = reader.getFieldsStream();
       rawDocs.seek(fromPointer);
       do {
         final int base = rawDocs.readVInt();
         final int code = rawDocs.readVInt();
+        final boolean dirtyChunk = (code & 2) != 0;
+        if (copyDirtyChunks) {
+          if (numBufferedDocs > 0) {
+            flush(true);
+          }
+        } else if (dirtyChunk || numBufferedDocs > 0) {
+          // Don't copy a dirty chunk or force a flush, which would create a dirty chunk
+          break;
+        }

Review Comment:
   Thanks for the good suggestions; I tried to address your feedback:
    - Logic is reorganized as Vigya suggested
    - DIRTY_BULK/CLEAN_BULK renamed to BULK/CLEAN_CHUNKS
    - Added the same functionality to term vectors
    - Removed `copyDirtyChunks`, passing the merge strategy directly
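
   For context, the reorganized branch in the hunk above boils down to a small decision per chunk. The sketch below is illustrative only (the enum, method, and class names are not Lucene's API); it mirrors the `(code & 2) != 0` dirty-chunk bit and the flush-vs-break logic from the diff:

   ```java
   public class ChunkCopyDecision {
     enum Action { FLUSH_THEN_COPY, COPY, STOP }

     // `code` is the per-chunk header VInt read from the raw fields stream;
     // bit 1 marks a dirty chunk, as in `(code & 2) != 0` in the diff.
     static Action decide(int code, boolean copyDirtyChunks, int numBufferedDocs) {
       final boolean dirtyChunk = (code & 2) != 0;
       if (copyDirtyChunks) {
         // Dirty chunks are acceptable: flush any buffered docs first so the
         // raw bytes can be appended at a chunk boundary, then bulk-copy.
         return numBufferedDocs > 0 ? Action.FLUSH_THEN_COPY : Action.COPY;
       } else if (dirtyChunk || numBufferedDocs > 0) {
         // Copying a dirty chunk, or forcing a flush of a partial buffer,
         // would propagate dirty chunks into the merged segment: stop bulk copy.
         return Action.STOP;
       }
       return Action.COPY;
     }

     public static void main(String[] args) {
       // Clean chunk, clean-chunks-only strategy, empty buffer: bulk copy is fine.
       System.out.println(decide(0, false, 0));
       // Dirty chunk under the clean-only strategy: abort bulk copying.
       System.out.println(decide(2, false, 0));
       // Dirty chunks allowed but docs are buffered: flush first, then copy.
       System.out.println(decide(2, true, 5));
     }
   }
   ```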



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

