[GitHub] [lucene] mocobeta commented on pull request #811: Add some basic tasks to help/workflow
mocobeta commented on PR #811: URL: https://github.com/apache/lucene/pull/811#issuecomment-1100650074

@dweiss would you mind if I push it to main?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (LUCENE-10518) FieldInfos consistency check can refuse to open Lucene 8 index
Nhat Nguyen created LUCENE-10518:

Summary: FieldInfos consistency check can refuse to open Lucene 8 index
Key: LUCENE-10518
URL: https://issues.apache.org/jira/browse/LUCENE-10518
Project: Lucene - Core
Issue Type: Bug
Components: core/index
Affects Versions: 8.10.1
Reporter: Nhat Nguyen

A field-infos consistency check introduced in Lucene 9 (LUCENE-9334) can refuse to open a Lucene 8 index. Lucene 8 can create a partial FieldInfo when it hits a non-aborting exception (for example, [term is too long|https://github.com/apache/lucene-solr/blob/6a6484ba396927727b16e5061384d3cd80d616b2/lucene/core/src/java/org/apache/lucene/index/DefaultIndexingChain.java#L944]) while processing the fields of a document. We don't have this problem in Lucene 9, where fields are processed in two phases and the [first phase|https://github.com/apache/lucene/blob/10ebc099c846c7d96f4ff5f9b7853df850fa8442/lucene/core/src/java/org/apache/lucene/index/IndexingChain.java#L589-L614] processes only FieldInfos. The issue can be reproduced with this snippet.
{code:java}
public void testWriteIndexOn8x() throws Exception {
  FieldType keywordField = new FieldType();
  keywordField.setTokenized(false);
  keywordField.setOmitNorms(true);
  keywordField.setIndexOptions(IndexOptions.DOCS);
  keywordField.freeze();
  try (Directory dir = newDirectory()) {
    IndexWriterConfig config = new IndexWriterConfig();
    config.setCommitOnClose(false);
    config.setMergePolicy(NoMergePolicy.INSTANCE);
    try (IndexWriter writer = new IndexWriter(dir, config)) {
      // first segment
      writer.addDocument(new Document()); // an empty doc
      Document d1 = new Document();
      byte[] chars = new byte[IndexWriter.MAX_STORED_STRING_LENGTH + 1];
      Arrays.fill(chars, (byte) 'a');
      d1.add(new Field("field", new BytesRef(chars), keywordField));
      d1.add(new BinaryDocValuesField("field", new BytesRef(chars)));
      expectThrows(IllegalArgumentException.class, () -> writer.addDocument(d1));
      writer.flush();
      // second segment
      Document d2 = new Document();
      d2.add(new Field("field", new BytesRef("hello world"), keywordField));
      d2.add(new SortedDocValuesField("field", new BytesRef("hello world")));
      writer.addDocument(d2);
      writer.flush();
      writer.commit();
      // check doc-values type consistency across segments
      Map<String, DocValuesType> docValuesTypes = new HashMap<>();
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        for (LeafReaderContext leaf : reader.leaves()) {
          for (FieldInfo fi : leaf.reader().getFieldInfos()) {
            DocValuesType current = docValuesTypes.putIfAbsent(fi.name, fi.getDocValuesType());
            if (current != null && current != fi.getDocValuesType()) {
              fail("cannot change DocValues type from " + current + " to "
                  + fi.getDocValuesType() + " for field \"" + fi.name + "\"");
            }
          }
        }
      }
    }
  }
}
{code}

I would like to propose to:
- Backport the two-phase fields processing from Lucene 9 to Lucene 8. The patch should be small and contained.
- Introduce an option in Lucene 9 to skip the field-infos consistency check (i.e., behave like Lucene 8 when the option is enabled).
/cc [~mayya] and [~jpountz]

-- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [lucene] LuXugang commented on a diff in pull request #792: LUCENE-10502: Use IndexedDISI to store docIds and DirectMonotonicWriter/Reader to handle ordToDoc
LuXugang commented on code in PR #792: URL: https://github.com/apache/lucene/pull/792#discussion_r851637023

## lucene/core/src/java/org/apache/lucene/codecs/lucene92/Lucene92HnswVectorsWriter.java:

@@ -0,0 +1,328 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.codecs.lucene92;
+
+import static org.apache.lucene.codecs.lucene92.Lucene92HnswVectorsFormat.DIRECT_MONOTONIC_BLOCK_SHIFT;
+import static org.apache.lucene.search.DocIdSetIterator.NO_MORE_DOCS;
+
+import java.io.IOException;
+import java.util.Arrays;
+import org.apache.lucene.codecs.CodecUtil;
+import org.apache.lucene.codecs.KnnVectorsReader;
+import org.apache.lucene.codecs.KnnVectorsWriter;
+import org.apache.lucene.codecs.lucene90.IndexedDISI;
+import org.apache.lucene.index.DocsWithFieldSet;
+import org.apache.lucene.index.FieldInfo;
+import org.apache.lucene.index.IndexFileNames;
+import org.apache.lucene.index.RandomAccessVectorValuesProducer;
+import org.apache.lucene.index.SegmentWriteState;
+import org.apache.lucene.index.VectorSimilarityFunction;
+import org.apache.lucene.index.VectorValues;
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.store.IndexInput;
+import org.apache.lucene.store.IndexOutput;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.IOUtils;
+import org.apache.lucene.util.hnsw.HnswGraph.NodesIterator;
+import org.apache.lucene.util.hnsw.HnswGraphBuilder;
+import org.apache.lucene.util.hnsw.NeighborArray;
+import org.apache.lucene.util.hnsw.OnHeapHnswGraph;
+import org.apache.lucene.util.packed.DirectMonotonicWriter;
+
+/**
+ * Writes vector values and knn graphs to index segments.
+ *
+ * @lucene.experimental
+ */
+public final class Lucene92HnswVectorsWriter extends KnnVectorsWriter {
+
+  private final SegmentWriteState segmentWriteState;
+  private final IndexOutput meta, vectorData, vectorIndex;
+  private final int maxDoc;
+
+  private final int maxConn;
+  private final int beamWidth;
+  private boolean finished;
+
+  Lucene92HnswVectorsWriter(SegmentWriteState state, int maxConn, int beamWidth)
+      throws IOException {
+    this.maxConn = maxConn;
+    this.beamWidth = beamWidth;
+
+    assert state.fieldInfos.hasVectorValues();
+    segmentWriteState = state;
+
+    String metaFileName =
+        IndexFileNames.segmentFileName(
+            state.segmentInfo.name, state.segmentSuffix, Lucene92HnswVectorsFormat.META_EXTENSION);
+
+    String vectorDataFileName =
+        IndexFileNames.segmentFileName(
+            state.segmentInfo.name,
+            state.segmentSuffix,
+            Lucene92HnswVectorsFormat.VECTOR_DATA_EXTENSION);
+
+    String indexDataFileName =
+        IndexFileNames.segmentFileName(
+            state.segmentInfo.name,
+            state.segmentSuffix,
+            Lucene92HnswVectorsFormat.VECTOR_INDEX_EXTENSION);
+
+    boolean success = false;
+    try {
+      meta = state.directory.createOutput(metaFileName, state.context);
+      vectorData = state.directory.createOutput(vectorDataFileName, state.context);
+      vectorIndex = state.directory.createOutput(indexDataFileName, state.context);
+
+      CodecUtil.writeIndexHeader(
+          meta,
+          Lucene92HnswVectorsFormat.META_CODEC_NAME,
+          Lucene92HnswVectorsFormat.VERSION_CURRENT,
+          state.segmentInfo.getId(),
+          state.segmentSuffix);
+      CodecUtil.writeIndexHeader(
+          vectorData,
+          Lucene92HnswVectorsFormat.VECTOR_DATA_CODEC_NAME,
+          Lucene92HnswVectorsFormat.VERSION_CURRENT,
+          state.segmentInfo.getId(),
+          state.segmentSuffix);
+      CodecUtil.writeIndexHeader(
+          vectorIndex,
+          Lucene92HnswVectorsFormat.VECTOR_INDEX_CODEC_NAME,
+          Lucene92HnswVectorsFormat.VERSION_CURRENT,
+          state.segmentInfo.getId(),
+          state.segmentSuffix);
+      maxDoc = state.segmentInfo.maxDoc();
+      success = true;
+    } finally {
+      if (success == false) {
+        IOUtils.closeWhileHandlingException(this);
+      }
+    }
+  }
+
+  @Override
+  public void writeField(FieldInfo fieldInfo, KnnVectorsReader knnVectorsReader)
+      throws IOException {
+    long v
[GitHub] [lucene] LuXugang commented on pull request #792: LUCENE-10502: Use IndexedDISI to store docIds and DirectMonotonicWriter/Reader to handle ordToDoc
LuXugang commented on PR #792: URL: https://github.com/apache/lucene/pull/792#issuecomment-1100685721

> @LuXugang Thanks a lot for your work. I was thinking maybe a better way to present these changes is to leave all format changes to a later PR, and for this PR just make the changes on `Lucene91HnswVectorsWriter` and `Lucene91HnswVectorsReader` directly; otherwise it is difficult to see what exactly has been changed. In a later PR we can then incorporate all the format changes by renaming the modified `Lucene91HnswVectorsWriter` to `Lucene92HnswVectorsWriter`, etc. WDYT?

No problem, I will force-push so that only the core changes are presented.
[GitHub] [lucene] yixunx commented on pull request #756: LUCENE-10470: [Tessellator] Prevent bridges that introduce collinear edges
yixunx commented on PR #756: URL: https://github.com/apache/lucene/pull/756#issuecomment-1100790763 Hi @iverase it turns out that many of my failed shapes failed because they contain self-intersections and I set `checkSelfIntersections = true` when running the tessellator. They do not run into exceptions if I set the flag to false. I did get one shape that failed with "unable to tessellate" though: ``` { "type": "Polygon", "coordinates": [ [ [ 33.45959656411607, -55.440650754647535 ], [ 33.45959656411607, -55.4409285324253 ], [ 33.45904100856052, -55.4409285324253 ], [ 33.45904100856052, -55.441206310203086 ], [ 33.458763230782736, -55.441206310203086 ], [ 33.458763230782736, -55.44231742131419 ], [ 33.4593187863383, -55.44231742131419 ], [ 33.4593187863383, -55.44259519909197 ], [ 33.45959656411607, -55.44259519909197 ], [ 33.45959656411607, -55.44231742131419 ], [ 33.45987434189385, -55.44231742131419 ], [ 33.45987434189385, -55.44259519909197 ], [ 33.46015211967163, -55.44259519909197 ], [ 33.46015211967163, -55.44287297686975 ], [ 33.460429897449416, -55.44287297686975 ], [ 33.460429897449416, -55.44315075464752 ], [ 33.46070767522719, -55.44315075464752 ], [ 33.46070767522719, -55.44287297686975 ], [ 33.460985453004966, -55.44287297686975 ], [ 33.460985453004966, -55.44231742131419 ], [ 33.46126323078274, -55.44231742131419 ], [ 33.46126323078274, -55.44176186575864 ], [ 33.460985453004966, -55.44176186575864 ], [ 33.460985453004966, -55.441206310203086 ], [ 33.46126323078274, -55.441206310203086 ], [ 33.46126323078274, -55.440650754647535 ], [ 33.461541008560516, -55.440650754647535 ], [ 33.461541008560516, -55.44037297686975 ], [ 33.4618187863383, -55.44037297686975 ], [ 33.4618187863383, -55.44009519909197 ], [ 33.463207675227196, -55.44009519909197 ], [ 33.463207675227196, -55.43926186575864 ], [ 33.46292989744941, -55.43926186575864 ], [ 33.46292989744941, -55.43898408798086 ], [ 33.46265211967163, -55.43898408798086 ], [ 33.46265211967163, 
-55.43926186575864 ], [ 33.46070767522719, -55.43926186575864 ], [ 33.46070767522719, -55.43898408798086 ], [ 33.46015211967163, -55.43898408798086 ], [ 33.46015211967163, -55.43953964353642 ], [ 33.45987434189385, -55.43953964353642 ], [ 33.45987434189385, -55.43981742131419 ], [ 33.45959656411607, -55.43981742131419 ], [ 33.45959656411607, -55.44037297686975 ], [ 33.4593187863383, -55.44037297686975 ], [ 33.4593187863383, -55.440650754647535 ], [ 33.45959656411607, -55.440650754647535 ] ], [ [ 33.460985453004966, -55.4409285324253 ], [ 33.460985453004966, -55.441206310203086 ], [ 33.46070767522719, -55.441206310203086 ], [ 33.46070767522719, -55.4409285324253 ], [ 33.460985453004966, -55.4409285324253 ] ], [ [ 33.46070767522719, -55.44009519909197 ], [ 33.46070767522719, -55.4409285324253
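```

Rings that cross themselves are exactly what `checkSelfIntersections` trips on. As a standalone illustration only (this is not Lucene's `Tessellator`, which uses its own edge structures; the class and method names below are hypothetical), a brute-force pairwise segment test is enough to classify small rings:

```java
// Minimal sketch of self-intersection detection for a closed ring.
// Coordinates are given as a flat [x0, y0, x1, y1, ...] array; the ring is
// implicitly closed from the last point back to the first. Only *proper*
// crossings are reported; shared endpoints and collinear touches are ignored.
class RingChecker {
  // Cross product of (b - a) and (c - a): > 0 if c is left of a->b, < 0 if right.
  static double cross(double ax, double ay, double bx, double by, double cx, double cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  }

  static boolean segmentsCross(double ax, double ay, double bx, double by,
                               double cx, double cy, double dx, double dy) {
    double d1 = cross(cx, cy, dx, dy, ax, ay);
    double d2 = cross(cx, cy, dx, dy, bx, by);
    double d3 = cross(ax, ay, bx, by, cx, cy);
    double d4 = cross(ax, ay, bx, by, dx, dy);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
  }

  static boolean selfIntersects(double[] pts) {
    int n = pts.length / 2;
    for (int i = 0; i < n; i++) {
      int i2 = (i + 1) % n;
      for (int j = i + 2; j < n; j++) {
        int j2 = (j + 1) % n;
        if (j2 == i) continue; // skip the pair sharing the closing vertex
        if (segmentsCross(pts[2 * i], pts[2 * i + 1], pts[2 * i2], pts[2 * i2 + 1],
                          pts[2 * j], pts[2 * j + 1], pts[2 * j2], pts[2 * j2 + 1])) {
          return true;
        }
      }
    }
    return false;
  }

  public static void main(String[] args) {
    double[] square = {0, 0, 1, 0, 1, 1, 0, 1}; // simple ring
    double[] bowtie = {0, 0, 1, 1, 1, 0, 0, 1}; // figure-eight ring
    System.out.println(selfIntersects(square)); // prints false
    System.out.println(selfIntersects(bowtie)); // prints true
  }
}
```

A pre-check like this on each ring makes it easy to separate "genuinely broken input" failures from shapes the tessellator itself cannot handle, which is the distinction being drawn above.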
[GitHub] [lucene] gautamworah96 commented on pull request #779: LUCENE-10488: Optimize Facets#getTopDims in IntTaxonomyFacets
gautamworah96 commented on PR #779: URL: https://github.com/apache/lucene/pull/779#issuecomment-1100800360

I have not taken a detailed look at the PR yet. From the benchmark results posted, I see a regression in several taxonomy tasks that utilize BinaryDocValues (-15%, -10%, -10%, -10%). Would you mind re-running the benchmark just to be sure, @Yuti-G? (Also verify that the only difference between mainline and the candidate is your commits.) Thanks for the effort, btw!

```
Task                          QPS baseline   StdDev  QPS candidate   StdDev  Pct diff              p-value
BrowseMonthTaxoFacets                19.31  (43.0%)          16.36  (20.9%)    -15.3% ( -55% - 85%)  0.153
BrowseDayOfYearTaxoFacets            17.31  (29.5%)          15.50  (19.3%)    -10.4% ( -45% - 54%)  0.186
BrowseDateTaxoFacets                 17.15  (29.9%)          15.36  (19.7%)    -10.4% ( -46% - 55%)  0.194
BrowseRandomLabelTaxoFacets          14.70  (29.6%)          13.19  (18.5%)    -10.3% ( -44% - 53%)  0.189
```
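The percentage column above can be sanity-checked by hand. A minimal sketch, assuming the Pct diff is simply (candidate − baseline) / baseline (the benchmark tool computes it from unrounded QPS, so recomputed figures can differ from the table in the last digit):

```java
import java.util.Locale;

// Recompute a "Pct diff" figure from a baseline/candidate QPS pair.
class PctDiff {
  static double pctDiff(double baselineQps, double candidateQps) {
    return (candidateQps - baselineQps) / baselineQps * 100.0;
  }

  public static void main(String[] args) {
    // BrowseMonthTaxoFacets row: baseline 19.31 QPS, candidate 16.36 QPS
    System.out.printf(Locale.ROOT, "%.1f%%%n", pctDiff(19.31, 16.36)); // prints -15.3%
  }
}
```

For the first row this reproduces the reported -15.3%, so the regression is consistent with the raw QPS numbers, though the large StdDev values (up to 43%) are why a re-run is being requested.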