[ https://issues.apache.org/jira/browse/LUCENE-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223706#comment-17223706 ]
Jim Ferenczi commented on LUCENE-9583:
--------------------------------------

By "wrong message" I mean that we require two implementations where only one is needed. It will be difficult to optimize one type of access without hurting the other, so I'd lean toward a single pattern. If it's random access, so be it, but the pros and cons should be considered carefully. The forward-iterator design is constraining, but it also forces us to think about how to access the data efficiently.

> Yes, think of a parent/child index with vectors only on the parent

I see these ordinals as an optimization detail of how the graph stores things. I don't think they should be exposed at all, since the user should interact with doc ids directly. It's something that could come later if needed, but that sounds like complexity we could avoid when introducing a new format. We don't need to optimize for the sparse case, at least not yet ;).


> How should we expose VectorValues.RandomAccess?
> -----------------------------------------------
>
>                 Key: LUCENE-9583
>                 URL: https://issues.apache.org/jira/browse/LUCENE-9583
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Michael Sokolov
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the newly-added {{VectorValues}} API, we have a {{RandomAccess}} sub-interface. [~jtibshirani] pointed out that this is not needed by some vector-indexing strategies, which can operate solely using a forward iterator (it is needed by HNSW), so in the interest of simplifying the public API we should not expose this internal detail (which, by the way, surfaces internal ordinals that are somewhat uninteresting outside the random-access API).
> I looked into how to move this inside the HNSW-specific code and remembered that we also currently make use of the RandomAccess API when merging vector fields over sorted indexes. Without it, we would need to load all vectors into RAM while flushing/merging, as we currently do in {{BinaryDocValuesWriter.BinaryDVs}}. I wonder if it's worth paying this cost for the simpler API.
> Another thing I noticed while reviewing this is that I moved the KNN {{search(float[] target, int topK, int fanout)}} method from {{VectorValues}} to {{VectorValues.RandomAccess}}. I think we could move this back and handle the HNSW requirements for search elsewhere. I wonder if that would alleviate the major concern here?
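
For readers following along, here is a rough sketch of the two access patterns being weighed in this discussion. The interface names and method signatures below are illustrative assumptions for the sake of comparison, not the actual Lucene {{VectorValues}} API.

{code:java}
import java.io.IOException;

// Illustrative sketch only: approximate shapes of the two access patterns
// discussed in this issue. Names and signatures are assumptions for
// discussion, not the actual Lucene API.

/**
 * Forward-only iteration, in the style of DocIdSetIterator: a consumer
 * visits each document's vector once, in increasing docid order.
 */
interface ForwardVectorValues {
  int docID();                               // current docid, or a sentinel when exhausted
  int nextDoc() throws IOException;          // advance to the next document that has a vector
  float[] vectorValue() throws IOException;  // vector for the current document
}

/**
 * Random access by dense ordinal: HNSW graph construction and search revisit
 * arbitrary vectors, so they address vectors by ordinal rather than by docid.
 */
interface RandomAccessVectorValues {
  int size();                                       // number of vectors in the field
  float[] vectorValue(int ord) throws IOException;  // vector for the given ordinal
  int ordToDoc(int ord);                            // map an internal ordinal back to a docid
}
{code}

A flush or merge can be driven by the first shape alone; it is graph-based KNN construction and search (and merging over sorted indexes without buffering all vectors) that pull in the second, which is where the ordinals leak into the public API.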