branch: elpa/gptel
commit 104fa5d7bc7f561d7ae321706943117b46e4f660
Author: Karthik Chikmagalur <karthikchikmaga...@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmaga...@gmail.com>

    gptel: Update NEWS, README, package commentary
    
    NEWS: Fill out in prep for new release.
    
    README: Mention `gptel-prompt-transform-functions', add more
    mentions of presets.
    
    gptel.el: Update package commentary with new backends and
    description of the presets feature.
---
 NEWS       | 118 ++++++++++++++++++++++++++++++++++++++++++++-----------------
 README.org |  44 ++++++++++++++---------
 gptel.el   |  16 +++++++--
 3 files changed, 126 insertions(+), 52 deletions(-)

diff --git a/NEWS b/NEWS
index 7372684c68..d67b71ea67 100644
--- a/NEWS
+++ b/NEWS
@@ -21,19 +21,42 @@
 
 ** New models and backends
 
-- Add support for ~gemini-2.5-pro-exp-03-25~.
+- Add support for ~gpt-4.1~, ~gpt-4.1-mini~, ~gpt-4.1-nano~, ~o3~ and
+  ~o4-mini~.
 
-- Add support for ~gpt-4.1~, ~gpt-4.1-mini~ and ~gpt-4.1-nano~.
+- Add support for ~gemini-2.5-pro-exp-03-25~,
+  ~gemini-2.5-flash-preview-04-17~ and ~gemini-2.5-pro-preview-05-06~.
 
-- Add support for ~gemini-2.5-flash-preview-04-17~.
+- Add support for ~claude-sonnet-4-20250514~ and
+  ~claude-opus-4-20250514~.
 
-- Add support for ~gemini-2.5-pro-preview-05-06~.
+- Add support for AWS Bedrock models.  You can create an AWS Bedrock
+  gptel backend with ~gptel-make-bedrock~, which see.  Please note:
+  AWS Bedrock support requires Curl 8.5.0 or higher.
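+
+  A minimal sketch of creating one (the region shown is a placeholder
+  and the keyword arguments are assumptions; consult
+  ~gptel-make-bedrock~'s docstring for the actual ones):
+
+  #+begin_src emacs-lisp
+  ;; Illustrative only: :region is an assumed keyword, its value a placeholder
+  (gptel-make-bedrock "AWS" :region "us-east-1" :stream t)
+  #+end_src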
+
+- You can now create an xAI backend with ~gptel-make-xai~, which see.
+  (xAI was supported before but the model configuration is now handled
+  for you by this function.)
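+
+  A minimal sketch, assuming the usual gptel backend keywords (see the
+  function's docstring):
+
+  #+begin_src emacs-lisp
+  ;; "your-api-key" is a placeholder
+  (gptel-make-xai "xAI" :key "your-api-key" :stream t)
+  #+end_src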
+
+- Add support for GitHub Copilot Chat.  See the README and
+  ~gptel-make-gh-copilot~.  Please note: this is only the chat
+  component of GitHub Copilot.  Copilot's ~completion-at-point~
+  (tab-completion) functionality is not supported by gptel.
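+
+  A minimal sketch (authentication is handled interactively; see the
+  README for the login flow):
+
+  #+begin_src emacs-lisp
+  (gptel-make-gh-copilot "Copilot")
+  #+end_src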
+
+- Add support for Sambanova.  This is an OpenAI-compatible API so you
+  can create a backend with ~gptel-make-openai~; see the README for
+  details.
+
+- Add support for Mistral Le Chat.  This is an OpenAI-compatible
+  API, so you can create a backend with ~gptel-make-openai~; see the
+  README for details.
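+
+  A minimal sketch of such an OpenAI-compatible backend (the host and
+  model below are placeholders; see the README for working values):
+
+  #+begin_src emacs-lisp
+  (gptel-make-openai "Sambanova"
+    :host "api.sambanova.ai"          ;placeholder host
+    :endpoint "/v1/chat/completions"
+    :stream t
+    :key "your-api-key"               ;placeholder key
+    :models '(some-model-name))       ;placeholder model symbol
+  #+end_src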
 
 ** New features and UI changes
 
-- gptel now supports handling reasoning/thinking blocks in responses from Gemini
-  models.  This is controlled by ~gptel-include-reasoning~, in the same way that
-  it handles other APIs.
+- gptel now supports handling reasoning/thinking blocks in responses
+  from Gemini models.  This is controlled by
+  ~gptel-include-reasoning~, in the same way that it handles other
+  APIs.
 
 - The new option ~gptel-curl-extra-args~ can be used to specify extra
   arguments to the Curl command used for the request.  This is the
@@ -41,34 +64,65 @@
   which can be used to specify Curl arguments when using a specific
   backend.
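+
+  For example, to route requests through a proxy (the proxy URL is
+  illustrative):
+
+  #+begin_src emacs-lisp
+  (setq gptel-curl-extra-args '("--proxy" "http://localhost:3128"))
+  #+end_src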
 
-- Tools now run in the buffer from which the request originates.
-
-- gptel can access MCP server tools by integrating with the mcp.el package,
-  which is at https://github.com/lizqwerscott/mcp.el.  (mcp.el is not yet
-  available in a package archive.)  To help with the integration, two new
-  commands are provided: ~gptel-mcp-connect~ and ~gptel-mcp-disconnect~.  You
-  can use these to start MCP servers selectively and add tools to gptel.  These
-  commands are also available from gptel's tools menu.  These commands are
-  currently not autoloaded by gptel.  To access them, require the
-  ~gptel-integrations~ feature.
-
-- You can now define "presets", which are a bundle of gptel options, such as the
-  backend, model, system message, included tools, temperature and so on.  This
-  set of options can be applied together, making it easy to switch between
-  different tasks using gptel.  From gptel's transient menu, you can save the
-  current configuration as a preset or apply another one.  Presets can be
-  applied globally, buffer-locally or for the next request only.  To persist
-  presets across Emacs sessions, define presets in your configuration
-  using ~gptel-make-preset~.
-
-- Links to plain-text files in chat buffers can be followed, and their contents
-  included with the request.  Using Org or Markdown links is an easy, intuitive,
-  persistent and buffer-local way to specify context.  To enable this behavior,
-  turn on ~gptel-track-media~, a pre-existing option which also controls whether
-  image/document links are followed and sent (when the model supports it).
+- Tools now run in the buffer from which the request originates.  This
+  can be significant when tools read or manipulate Emacs' state.
+
+- gptel can access MCP server tools by integrating with the mcp.el
+  package, which is at https://github.com/lizqwerscott/mcp.el.
+  (mcp.el is not yet available in a package archive.)  To help with
+  the integration, two new commands are provided: ~gptel-mcp-connect~
+  and ~gptel-mcp-disconnect~.  You can use these to start MCP servers
+  selectively and add tools to gptel.  These commands are also
+  available from gptel's tools menu.
+  
+  These commands are currently not autoloaded by gptel.  To access
+  them, require the ~gptel-integrations~ feature.
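+
+  A minimal sketch of enabling the integration:
+
+  #+begin_src emacs-lisp
+  (require 'gptel-integrations)
+  ;; then M-x gptel-mcp-connect, or use gptel's tools menu
+  #+end_src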
+
+- You can now define "presets", which are a bundle of gptel options,
+  such as the backend, model, system message, included tools,
+  temperature and so on.  This set of options can be applied together,
+  making it easy to switch between different tasks using gptel.  From
+  gptel's transient menu, you can save the current configuration as a
+  preset or apply another one.  Presets can be applied globally,
+  buffer-locally or for the next request only.  To persist presets
+  across Emacs sessions, define presets in your configuration using
+  ~gptel-make-preset~.
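+
+  A small example, using only the keywords shown in the README (see
+  ~gptel-make-preset~'s docstring for the full list):
+
+  #+begin_src emacs-lisp
+  (gptel-make-preset 'explain
+    :description "Explain code to a novice"
+    :system "Explain what this code does to a novice programmer.")
+  #+end_src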
+
+- When using ~gptel-send~ from anywhere in Emacs, you can now include
+  a "cookie" of the form =@preset-name= in the prompt text to apply
+  that preset before sending.  The preset is applied for that request
+  only.  This is an easy way to switch models, tools, system
+  messages (etc) on the fly.  In chat buffers the preset cookie is
+  fontified and available for completion via ~completion-at-point~.
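+
+  For example, sending the prompt =@explain How does this function
+  work?= applies your (previously defined) =explain= preset for that
+  request only.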
+
+- For scripting purposes, provide a ~gptel-with-preset~ macro to
+  create an environment with a preset applied.
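+
+  A sketch, assuming the macro takes a preset name followed by a body
+  (check its docstring for the exact calling convention):
+
+  #+begin_src emacs-lisp
+  ;; Assumes a preset named `explain' has been defined
+  (gptel-with-preset 'explain
+    (gptel-request "What does this Elisp macro do?"))
+  #+end_src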
+
+- Links to plain-text files in chat buffers can be followed, and their
+  contents included with the request.  Using Org or Markdown links is
+  an easy, intuitive, persistent and buffer-local way to specify
+  context.  To enable this behavior, turn on ~gptel-track-media~, a
+  pre-existing option which also controls whether image/document links
+  are followed and sent (when the model supports it).
+
+- A new hook ~gptel-prompt-transform-functions~ is provided for
+  arbitrary transformations of the prompt prior to sending a request.
+  This hook runs in a temporary buffer containing the text to be sent.
+  Any aspect of the request (the text, destination, request
+  parameters, response handling preferences) can be modified
+  buffer-locally here.  These hook functions can be asynchronous.
+
+- The user option ~gptel-use-curl~ can now be used to specify a Curl
+  path.
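+
+  For example (the path is illustrative):
+
+  #+begin_src emacs-lisp
+  (setq gptel-use-curl "/usr/local/bin/curl")
+  #+end_src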
+
+- The current kill can be added to gptel's context.  To enable this,
+  turn on ~gptel-expert-commands~ and use gptel's transient menu. 
 
 ** Notable Bug fixes
 
+- Fix more Org markup conversion edge cases involving nested Markdown
+  delimiters.
+
 * 0.9.8 2025-03-13
 
 Version 0.9.8 adds support for new Gemini, Anthropic, OpenAI,
diff --git a/README.org b/README.org
index 6ee525c7b3..f84b337906 100644
--- a/README.org
+++ b/README.org
@@ -90,7 +90,6 @@ See also [[https://youtu.be/g1VMGhC5gRU][this youtube demo (2 minutes)]] by Armi
 gptel uses Curl if available, but falls back to the built-in url-retrieve to work without external dependencies.
 
 ** Contents :toc:
-  - [[#breaking-changes][Breaking changes!]]
   - [[#installation][Installation]]
       - [[#straight][Straight]]
       - [[#manual][Manual]]
@@ -158,7 +157,7 @@ gptel uses Curl if available, but falls back to the built-in url-retrieve to wor
     - [[#packages-using-gptel][Packages using gptel]]
   - [[#acknowledgments][Acknowledgments]]
 
-** Breaking changes!
+** COMMENT Breaking changes!
 
 - =gptel-model= is now expected to be a symbol, not a string.  Please update your configuration.
 
@@ -1035,6 +1034,8 @@ gptel provides a few powerful, general purpose and flexible commands.  You can d
 
 #+html: <img src="https://github.com/karthink/gptel/assets/8607532/3562a6e2-7a5c-4f7e-8e57-bf3c11589c73" align="center" alt="Image showing gptel's menu with some of the available query options.">
 
+You can also define a "preset" bundle of options that are applied together, see [[#option-presets][Option presets]] below.
+
 *** In a dedicated chat buffer:
 
 *Note*: gptel works anywhere in Emacs.  The dedicated chat buffer only adds some conveniences.
@@ -1051,6 +1052,8 @@ That's it. You can go back and edit previous prompts and responses if you want.
 
 The default mode is =markdown-mode= if available, else =text-mode=.  You can set =gptel-default-mode= to =org-mode= if desired.
 
+You can also define a "preset" bundle of options that are applied together, see [[#option-presets][Option presets]] below.
+
 #+html: <details><summary>
 **** Including media (images, documents or plain-text files) with requests
 #+html: </summary>
@@ -1543,7 +1546,7 @@ Other Emacs clients for LLMs prescribe the format of the interaction (a comint s
 | =gptel-prompt-prefix-alist=   | Text inserted before queries.                                   |
 | =gptel-response-prefix-alist= | Text inserted before responses.                                 |
 | =gptel-track-response=        | Distinguish between user messages and LLM responses?            |
-| =gptel-track-media=           | Send images or other media from links?                          |
+| =gptel-track-media=           | Send text, images or other media from links?                    |
 | =gptel-confirm-tool-calls=    | Confirm all tool calls?                                          |
 | =gptel-include-tool-results=  | Include tool results in the LLM response?                        |
 | =gptel-use-header-line=       | Display status messages in header-line (default) or minibuffer  |
@@ -1557,19 +1560,19 @@ Other Emacs clients for LLMs prescribe the format of the interaction (a comint s
 | =gptel-org-ignore-elements=   | Ignore parts of the buffer when sending a query       |
 |-------------------------------+-------------------------------------------------------|
 
-|---------------------------------+-------------------------------------------------------------|
-| *Hooks for customization*       |                                                              |
-|---------------------------------+-------------------------------------------------------------|
-| =gptel-save-state-hook=         | Runs before saving the chat state to a file on disk         |
-| =gptel-prompt-filter-hook=      | Runs in a temp buffer to transform text before sending      |
-| =gptel-post-request-hook=       | Runs immediately after dispatching a =gptel-request=.       |
-| =gptel-pre-response-hook=       | Runs before inserting the LLM response into the buffer      |
-| =gptel-post-response-functions= | Runs after inserting the full LLM response into the buffer  |
-| =gptel-post-stream-hook=        | Runs after each streaming insertion                         |
-| =gptel-context-wrap-function=   | To include additional context formatted your way            |
-| =gptel-rewrite-default-action=  | Automatically diff, ediff, merge or replace refactored text |
-| =gptel-post-rewrite-functions=  | Runs after a =gptel-rewrite= request succeeds               |
-|---------------------------------+-------------------------------------------------------------|
+|------------------------------------+-------------------------------------------------------------|
+| *Hooks for customization*          |                                                              |
+|------------------------------------+-------------------------------------------------------------|
+| =gptel-save-state-hook=            | Runs before saving the chat state to a file on disk         |
+| =gptel-prompt-transform-functions= | Runs in a temp buffer to transform text before sending      |
+| =gptel-post-request-hook=          | Runs immediately after dispatching a =gptel-request=.       |
+| =gptel-pre-response-hook=          | Runs before inserting the LLM response into the buffer      |
+| =gptel-post-response-functions=    | Runs after inserting the full LLM response into the buffer  |
+| =gptel-post-stream-hook=           | Runs after each streaming insertion                         |
+| =gptel-context-wrap-function=      | To include additional context formatted your way            |
+| =gptel-rewrite-default-action=     | Automatically diff, ediff, merge or replace refactored text |
+| =gptel-post-rewrite-functions=     | Runs after a =gptel-rewrite= request succeeds               |
+|------------------------------------+-------------------------------------------------------------|
 
 #+html: </details>
 
@@ -1582,8 +1585,15 @@ Once defined, presets can be applied from gptel's transient menu:
 #+html: <img src="https://github.com/user-attachments/assets/e0cf6a32-d999-4138-8369-23512f5e9311" align="center" />
 #+html: <br>
 
-To define a preset, use the =gptel-make-preset= function, which takes a name and keyword-value pairs of settings:
+To define a preset, use the =gptel-make-preset= function, which takes a name and keyword-value pairs of settings.
+
+Presets can be used to set individual options.  Here is an example of a preset to set the system message (and do nothing else):
+#+begin_src emacs-lisp
+(gptel-make-preset 'explain
+  :system "Explain what this code does to a novice programmer.")
+#+end_src
 
+More generally, you can specify a bundle of options:
 #+begin_src emacs-lisp
 (gptel-make-preset 'gpt4coding                       ;preset name, a symbol
   :description "A preset optimized for coding tasks" ;for your reference
diff --git a/gptel.el b/gptel.el
index 0cb94b8371..c234ccc4f6 100644
--- a/gptel.el
+++ b/gptel.el
@@ -34,9 +34,10 @@
 ;;
 ;; gptel supports:
 ;;
-;; - The services ChatGPT, Azure, Gemini, Anthropic AI, Together.ai,
-;;   Perplexity, Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras,
-;;   Github Models, Novita AI, xAI and Kagi (FastGPT & Summarizer).
+;; - The services ChatGPT, Azure, Gemini, Anthropic AI, Together.ai, Perplexity,
+;;   Anyscale, OpenRouter, Groq, PrivateGPT, DeepSeek, Cerebras, Github Models,
+;;   GitHub Copilot chat, AWS Bedrock, Novita AI, xAI, Sambanova, Mistral Le
+;;   Chat and Kagi (FastGPT & Summarizer).
 ;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
 ;;
 ;; Additionally, any LLM service (local or remote) that provides an
@@ -159,6 +160,15 @@
 ;; or fill in the region.  This is accessible via `gptel-rewrite', and also from
 ;; the `gptel-send' menu.
 ;;
+;; Presets
+;;
+;; Define a bundle of configuration (model, backend, system message, tools etc)
+;; as a "preset" that can be applied together, making it easy to switch between
+;; tasks in gptel.  Presets can be saved and applied from gptel's transient
+;; menu.  You can also include a cookie of the form "@preset-name" in the prompt
+;; to send a request with a preset applied.  This feature works everywhere, but
+;; preset cookies are also fontified in chat buffers.
+;;
 ;; gptel in Org mode:
 ;;
 ;; gptel offers a few extra conveniences in Org mode:
