branch: elpa/gptel
commit 030b22ba13db8f05ebc821a35417de46e1a7fd5b
Author: Karthik Chikmagalur <[email protected]>
Commit: Karthik Chikmagalur <[email protected]>

    gptel-openai: Handle max_tokens for gpt-5.1 correctly
    
    * gptel-openai.el (gptel--request-data): When used with the
    OpenAI API, gpt-5.1 expects the max_completion_tokens parameter
    (not max_tokens) to cap the response token count, like OpenAI's
    other recent reasoning models.  Add this model to the list of
    exceptions consulted when constructing the request payload.
---
 gptel-openai.el | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gptel-openai.el b/gptel-openai.el
index d3eea8e99c4..29e7bb022d3 100644
--- a/gptel-openai.el
+++ b/gptel-openai.el
@@ -300,7 +300,7 @@ Mutate state INFO with response metadata."
            :stream ,(or gptel-stream :json-false)))
         (reasoning-model-p ; TODO: Embed this capability in the model's properties
          (memq gptel-model '(o1 o1-preview o1-mini o3-mini o3 o4-mini
-                                gpt-5 gpt-5-mini gpt-5-nano))))
+                                gpt-5 gpt-5-mini gpt-5-nano gpt-5.1))))
     (when (and gptel-temperature (not reasoning-model-p))
       (plist-put prompts-plist :temperature gptel-temperature))
     (when gptel-use-tools

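For reference, here is a minimal Emacs Lisp sketch of what this check
controls.  It is not gptel's actual payload code (the key selection in
gptel--request-data falls outside the hunk above); the max-tokens value,
the bound gptel-model value, and the plist shape are illustrative
assumptions.

    ;; Illustrative sketch, not gptel's actual code: the
    ;; reasoning-model check picks the token-limit key OpenAI expects.
    (let* ((gptel-model 'gpt-5.1)   ; assumed value for illustration
           (max-tokens 1024)        ; hypothetical output limit
           (reasoning-model-p
            (memq gptel-model '(o1 o1-preview o1-mini o3-mini o3 o4-mini
                                   gpt-5 gpt-5-mini gpt-5-nano gpt-5.1)))
           (prompts-plist
            (list :model (format "%s" gptel-model)
                  ;; Reasoning models (now including gpt-5.1) reject
                  ;; :max_tokens and require :max_completion_tokens.
                  (if reasoning-model-p :max_completion_tokens :max_tokens)
                  max-tokens)))
      prompts-plist)
    ;; => (:model "gpt-5.1" :max_completion_tokens 1024)

Without gpt-5.1 in the memq list, the sketch would fall through to
:max_tokens, which the OpenAI API rejects for this model.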