branch: externals/llm
commit 7802efbe06c4674c773b547f8a71530deb27d6a0
Author: Andrew Hyatt <ahy...@gmail.com>
Commit: GitHub <nore...@github.com>

    Return all embeddings in llm-batch-embeddings-async (#185)
    
    This should fix https://github.com/ahyatt/llm/issues/184.
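    
    For context, a minimal sketch of calling the fixed entry point from
    Emacs Lisp; the Ollama provider, model name, and callback bodies below
    are illustrative assumptions, not part of this change:
    
        ;; Request embeddings for several strings in one call.  With this
        ;; fix, the success callback receives one embedding per input
        ;; string rather than a single embedding.
        (llm-batch-embeddings-async
         (make-llm-ollama :embedding-model "nomic-embed-text") ; assumed model
         '("first text" "second text" "third text")
         (lambda (embeddings)              ; success: all embeddings
           (message "Got %d embeddings" (length embeddings)))
         (lambda (type msg)                ; error type and message
           (message "Embedding error %s: %s" type msg)))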
---
 NEWS.org              | 1 +
 llm-provider-utils.el | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/NEWS.org b/NEWS.org
index beab8d54a4..0f3b7cd859 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,6 +1,7 @@
 * Version 0.25.0
 - Add =llm-ollama-authed= provider, which is like Ollama but takes a key.
 - Set Gemini 2.5 Pro to be the default Gemini model
+- Fix =llm-batch-embeddings-async= so it returns all embeddings
 * Version 0.24.2
 - Fix issue with some Open AI compatible providers needing models to be passed by giving a non-nil default.
 - Add Gemini 2.5 Pro
diff --git a/llm-provider-utils.el b/llm-provider-utils.el
index c75c2e22ac..6fc1e113e5 100644
--- a/llm-provider-utils.el
+++ b/llm-provider-utils.el
@@ -300,7 +300,7 @@ return a list of `llm-chat-prompt-tool-use' structs.")
                         err-msg)
                      (llm-provider-utils-callback-in-buffer
                       buf vector-callback
-                      (llm-provider-embedding-extract-result provider data))))
+                      (llm-provider-batch-embeddings-extract-result provider data))))
      :on-error (lambda (_ data)
                  (llm-provider-utils-callback-in-buffer
                   buf error-callback 'error
