goankur commented on code in PR #13572:
URL: https://github.com/apache/lucene/pull/13572#discussion_r1828576127


##########
lucene/core/src/java21/org/apache/lucene/internal/vectorization/PanamaVectorUtilSupport.java:
##########
@@ -291,25 +296,125 @@ private float squareDistanceBody(float[] a, float[] b, int limit) {
     return res1.add(res2).reduceLanes(ADD);
   }
 
-  // Binary functions, these all follow a general pattern like this:
-  //
-  //   short intermediate = a * b;
-  //   int accumulator = (int)accumulator + (int)intermediate;
-  //
-  // 256 or 512 bit vectors can process 64 or 128 bits at a time, respectively
-  // intermediate results use 128 or 256 bit vectors, respectively
-  // final accumulator uses 256 or 512 bit vectors, respectively
-  //
-  // We also support 128 bit vectors, going 32 bits at a time.
-  // This is slower but still faster than not vectorizing at all.
-
+  /**
+   * This method SHOULD NOT be used directly when the native dot-product is enabled, because it
+   * allocates off-heap memory in the hot code path and copies the input byte vectors into it.
+   * This is necessary even with Panama APIs: native code is handed a pointer to a memory address,
+   * and if the content at that address can be moved at any time (by the Java GC, in the case of
+   * heap-allocated memory), the safety of the native computation cannot be guaranteed. If we try
+   * to pass a MemorySegment backed by on-heap memory to native code we get
+   * "java.lang.UnsupportedOperationException: Not a native address".
+   *
+   * <p>Stack overflow thread:
+   * https://stackoverflow.com/questions/69521289/jep-412-pass-a-on-heap-byte-array-to-native-code-getting-unsupportedoperatione
+   * explains the issue in more detail.
+   *
+   * <p>Q1. Why did we enable the native code path here if it is inefficient? A1. So that it can be
+   * exercised in <b>unit-tests</b> to test the correctness of the underlying vectorized C
+   * implementation without directly relying on preview Panama APIs. Without this, unit tests would
+   * need to use reflection to 1) get a method handle of {@link #dotProduct(MemorySegment, MemorySegment)},
+   * as it is not part of {@link VectorUtilSupport}, 2) get a method handle of a to-be-defined
+   * method that copies byte[] to off-heap {@link MemorySegment}s and returns them as Objects, and
+   * 3) invoke the method handle from 1) with the Objects from 2). All doable, but it adds
+   * unnecessary complexity to the unit-test suite.
+   *
+   * <p>Q2. Which method should HNSW scoring components use to compute the dot-product of
+   * byte-vectors in native code? A2. They should use {@link #dotProduct(MemorySegment, MemorySegment)}
+   * directly and control the creation and reuse of off-heap MemorySegments, since dotProduct is on
+   * the hot code path for both indexing and searching. Stored vectors residing in memory-mapped
+   * files can simply be accessed using {@link MemorySegment#asSlice(long, long, long)}, which
+   * neither allocates new memory nor requires copying vector data onto the JVM heap. For the
+   * target vector, copying to off-heap memory is still needed, but the allocation can happen once
+   * per scorer.
+   *
+   * <p>Q3. Should JMH benchmarks measure the performance of this method? A3. No, because they would

Review Comment:
   The `assert false:...` won't be necessary after undoing the changes, as the old code wraps the
input `byte[]` into an on-heap MemorySegment, so the native implementation is not exercised in
that case.
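
For context on the heap-vs-off-heap distinction described in the javadoc above, here is a minimal
sketch, assuming the Java 21+ foreign memory APIs; the class and helper names are hypothetical and
are not part of this PR:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;

// Illustrative sketch only; 'toNativeSegment' is a hypothetical helper, not a Lucene API.
class OffHeapCopySketch {
  static MemorySegment toNativeSegment(Arena arena, byte[] vector) {
    // A heap-backed segment wraps the byte[] directly, but the GC may move the
    // array, so handing it to native code fails with
    // "java.lang.UnsupportedOperationException: Not a native address" (per the javadoc above).
    MemorySegment heapSegment = MemorySegment.ofArray(vector);

    // The workable (but costly on a hot path) alternative: allocate off-heap
    // memory and copy the vector into it before calling native code.
    MemorySegment nativeSegment = arena.allocate(vector.length);
    MemorySegment.copy(heapSegment, 0, nativeSegment, 0, vector.length);
    return nativeSegment;
  }
}
```

A caller would typically obtain the `Arena` from `Arena.ofConfined()` and close it once the native
computation is done, though the lifecycle chosen by the PR may differ.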


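The Q2 answer in the javadoc describes slicing stored vectors out of the memory-mapped file and
reusing a single off-heap segment for the target vector. A rough sketch of that reuse pattern,
with a hypothetical scorer class and a placeholder standing in for the native dot-product call:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;

// Hypothetical scorer skeleton for the Q2 pattern; not the PR's actual scorer class.
class ByteVectorScorerSketch {
  private final MemorySegment mappedVectors; // stored vectors in a memory-mapped file
  private final MemorySegment query;         // off-heap copy of the target vector
  private final int dims;

  ByteVectorScorerSketch(Arena arena, MemorySegment mappedVectors, byte[] target) {
    this.mappedVectors = mappedVectors;
    this.dims = target.length;
    // One off-heap allocation and copy per scorer, not per comparison.
    this.query = arena.allocate(dims);
    MemorySegment.copy(MemorySegment.ofArray(target), 0, this.query, 0, dims);
  }

  int score(int ord) {
    // Stored vector: a zero-copy view over the mapped file via asSlice.
    MemorySegment stored = mappedVectors.asSlice((long) ord * dims, dims);
    return nativeDotProduct(query, stored);
  }

  // Placeholder for the native dot-product entry point discussed in the PR.
  private static int nativeDotProduct(MemorySegment a, MemorySegment b) {
    throw new UnsupportedOperationException("native call not wired up in this sketch");
  }
}
```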

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

