branch: elpa/gptel
commit ce042d692a5d31bb1aadcf5ad96817547b1b3045
Author: Henrik Ahlgren <pa...@seestieto.com>
Commit: GitHub <nore...@github.com>
    gptel: Use "model" instead of "GPT model" (#635)

    GPT, or Generative Pre-trained Transformer, is a designation
    originally used by OpenAI to describe their language models.  Other
    companies or researchers might develop models with similar
    architectures, but "GPT" is primarily associated with OpenAI's
    series.

    * gptel.el (gptel-model, gptel-mode): Use "model" instead of
    "GPT model".
    * README.org: Ditto.
    * gptel-transient.el (gptel--infix-provider): Ditto.
---
 README.org         | 2 +-
 gptel-transient.el | 2 +-
 gptel.el           | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.org b/README.org
index 5f6ba6b5eb..f0b518df5e 100644
--- a/README.org
+++ b/README.org
@@ -887,7 +887,7 @@ gptel provides a few powerful, general purpose and flexible commands.  You can d
 2. If a region is selected, the conversation will be limited to its contents.
 3. Call =M-x gptel-send= with a prefix argument (~C-u~)
-   - to set chat parameters (GPT model, backend, system message etc) for this buffer,
+   - to set chat parameters (model, backend, system message etc) for this buffer,
    - include quick instructions for the next request only,
    - to add additional context -- regions, buffers or files -- to gptel,
    - to read the prompt from or redirect the response elsewhere,
diff --git a/gptel-transient.el b/gptel-transient.el
index 3dbd97e311..14f2297eb5 100644
--- a/gptel-transient.el
+++ b/gptel-transient.el
@@ -909,7 +909,7 @@ responses."
 
 (transient-define-infix gptel--infix-provider ()
   "AI Provider for Chat."
-  :description "GPT Model"
+  :description "Model"
   :class 'gptel-provider-variable
   :prompt "Model: "
   :variable 'gptel-backend
diff --git a/gptel.el b/gptel.el
index 9525475125..e6eb36425f 100644
--- a/gptel.el
+++ b/gptel.el
@@ -617,7 +617,7 @@ sources:
 
 (defcustom gptel-model 'gpt-4o-mini
   (concat
-   "GPT Model for chat.
+   "Model for chat.
 
 The name of the model, as a symbol.  This is the name as expected
 by the LLM provider's API.
@@ -1282,7 +1282,7 @@ file."
                    (buttonize (concat "[" model "]")
                        (lambda (&rest _) (gptel-menu)))
                  'mouse-face 'highlight
-                 'help-echo "GPT model in use"))))))
+                 'help-echo "Model in use"))))))
     (setq mode-line-process
           '(:eval (concat " "
                    (buttonize (gptel--model-name gptel-model)
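The mode-line hunk above builds a clickable model indicator with `buttonize' from button.el. A minimal sketch of the same pattern, where the variable name, model string, and click action are illustrative only and not gptel's actual code:

```elisp
;; Sketch only: `my/model' and the click action are illustrative,
;; not gptel's implementation.
(defvar my/model "gpt-4o-mini")

(setq mode-line-process
      (list (propertize
             ;; `buttonize' (button.el, Emacs 29+) returns a string
             ;; that invokes the callback when clicked.
             (buttonize (concat "[" my/model "]")
                        (lambda (&rest _) (message "model menu goes here")))
             'mouse-face 'highlight       ; highlight on mouse-over
             'help-echo "Model in use"))) ; tooltip text
```

On Emacs 28 the same function is available under its earlier name, `button-buttonize'.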