branch: elpa/gptel
commit db4e5c7ea39f7d58d8ca4856c98a6df71b1e18bf
Author: wlauppe <whlau...@web.de>
Commit: GitHub <nore...@github.com>

    Readme: Add Sambanova documentation (#803)
    
    * README.org: Add Sambanova documentation.
---
 README.org | 79 ++++++++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 54 insertions(+), 25 deletions(-)

diff --git a/README.org b/README.org
index f7dcfb66c7..1bebbbf406 100644
--- a/README.org
+++ b/README.org
@@ -8,31 +8,32 @@
 gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.  It works in the spirit of Emacs, available at any time and uniformly in any buffer.
 
 #+html: <div align="center">
-| LLM Backend        | Supports | Requires                   |
-|--------------------+----------+----------------------------|
-| ChatGPT            | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                    |
-| Anthropic (Claude) | ✓        | [[https://www.anthropic.com/api][API key]]                    |
-| Gemini             | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                    |
-| Ollama             | ✓        | [[https://ollama.ai/][Ollama running locally]]     |
-| Llama.cpp          | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]]  |
-| Llamafile          | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]     |
-| GPT4All            | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]    |
-| Kagi FastGPT       | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
-| Kagi Summarizer    | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
-| Azure              | ✓        | Deployment and API key     |
-| Groq               | ✓        | [[https://console.groq.com/keys][API key]]                    |
-| Mistral Le Chat    | ✓        | [[https://console.mistral.ai/api-keys][API key]]                    |
-| Perplexity         | ✓        | [[https://docs.perplexity.ai/docs/getting-started][API key]]                    |
-| OpenRouter         | ✓        | [[https://openrouter.ai/keys][API key]]                    |
-| together.ai        | ✓        | [[https://api.together.xyz/settings/api-keys][API key]]                    |
-| Anyscale           | ✓        | [[https://docs.endpoints.anyscale.com/][API key]]                    |
-| PrivateGPT         | ✓        | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |
-| DeepSeek           | ✓        | [[https://platform.deepseek.com/api_keys][API key]]                    |
-| Cerebras           | ✓        | [[https://cloud.cerebras.ai/][API key]]                    |
-| Github Models      | ✓        | [[https://github.com/settings/tokens][Token]]                      |
-| Novita AI          | ✓        | [[https://novita.ai/model-api/product/llm-api?utm_source=github_gptel&utm_medium=github_readme&utm_campaign=link][Token]]                      |
-| xAI                | ✓        | [[https://console.x.ai?utm_source=github_gptel&utm_medium=github_readme&utm_campaign=link][API key]]                    |
-| Github CopilotChat | ✓        | Github account             |
+| LLM Backend          | Supports | Requires                   |
+|----------------------+----------+----------------------------|
+| ChatGPT              | ✓        | [[https://platform.openai.com/account/api-keys][API key]]                    |
+| Anthropic (Claude)   | ✓        | [[https://www.anthropic.com/api][API key]]                    |
+| Gemini               | ✓        | [[https://makersuite.google.com/app/apikey][API key]]                    |
+| Ollama               | ✓        | [[https://ollama.ai/][Ollama running locally]]     |
+| Llama.cpp            | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]]  |
+| Llamafile            | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]]     |
+| GPT4All              | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]]    |
+| Kagi FastGPT         | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
+| Kagi Summarizer      | ✓        | [[https://kagi.com/settings?p=api][API key]]                    |
+| Azure                | ✓        | Deployment and API key     |
+| Groq                 | ✓        | [[https://console.groq.com/keys][API key]]                    |
+| Mistral Le Chat      | ✓        | [[https://console.mistral.ai/api-keys][API key]]                    |
+| Perplexity           | ✓        | [[https://docs.perplexity.ai/docs/getting-started][API key]]                    |
+| OpenRouter           | ✓        | [[https://openrouter.ai/keys][API key]]                    |
+| together.ai          | ✓        | [[https://api.together.xyz/settings/api-keys][API key]]                    |
+| Anyscale             | ✓        | [[https://docs.endpoints.anyscale.com/][API key]]                    |
+| PrivateGPT           | ✓        | [[https://github.com/zylon-ai/private-gpt#-documentation][PrivateGPT running locally]] |
+| DeepSeek             | ✓        | [[https://platform.deepseek.com/api_keys][API key]]                    |
+| Sambanova (DeepSeek) | ✓        | [[https://cloud.sambanova.ai/apis][API key]]                    |
+| Cerebras             | ✓        | [[https://cloud.cerebras.ai/][API key]]                    |
+| Github Models        | ✓        | [[https://github.com/settings/tokens][Token]]                      |
+| Novita AI            | ✓        | [[https://novita.ai/model-api/product/llm-api?utm_source=github_gptel&utm_medium=github_readme&utm_campaign=link][Token]]                      |
+| xAI                  | ✓        | [[https://console.x.ai?utm_source=github_gptel&utm_medium=github_readme&utm_campaign=link][API key]]                    |
+| Github CopilotChat   | ✓        | Github account             |
 #+html: </div>
 
 *General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -110,6 +111,7 @@ gptel uses Curl if available, but falls back to the built-in url-retrieve to wor
       - [[#openrouter][OpenRouter]]
       - [[#privategpt][PrivateGPT]]
       - [[#deepseek][DeepSeek]]
+      - [[#sambanova-deepseek][Sambanova (DeepSeek)]]
       - [[#cerebras][Cerebras]]
       - [[#github-models][Github Models]]
       - [[#novita-ai][Novita AI]]
@@ -748,6 +750,33 @@ The above code makes the backend available to select.  If you want it to be the
 
 #+html: </details>
 #+html: <details><summary>
+
+**** Sambanova (DeepSeek)
+#+html: </summary>
+Sambanova offers a number of LLMs through its SambaNova Cloud service, DeepSeek-R1 among them.  Token generation for DeepSeek-R1 via Sambanova is about six times faster than when the model is accessed through deepseek.com.
+
+Register a backend with
+#+begin_src emacs-lisp
+(gptel-make-openai "Sambanova"        ;Any name you want
+  :host "api.sambanova.ai"
+  :endpoint "/v1/chat/completions"
+  :stream t                          ;for streaming responses
+  :key "your-api-key"               ;can be a function that returns the key
+  :models '(DeepSeek-R1))
+#+end_src
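+
+As the comment above notes, =:key= can be a function instead of a literal string.  A minimal sketch, assuming the key is stored in a hypothetical =SAMBANOVA_API_KEY= environment variable:
+#+begin_src emacs-lisp
+;; Sketch: read the API key from an environment variable at request time.
+;; SAMBANOVA_API_KEY is an assumed variable name, not one gptel defines.
+(gptel-make-openai "Sambanova"
+  :host "api.sambanova.ai"
+  :endpoint "/v1/chat/completions"
+  :stream t
+  :key (lambda () (getenv "SAMBANOVA_API_KEY"))
+  :models '(DeepSeek-R1))
+#+end_src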
+
+You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
+
+***** (Optional) Set as the default gptel backend
+The above code makes the backend available to select.  If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=.  Add these two lines to your configuration:
+#+begin_src emacs-lisp
+;; OPTIONAL configuration
+(setq gptel-model 'DeepSeek-R1)
+(setq gptel-backend (gptel-get-backend "Sambanova"))
+#+end_src
+#+html: </details>
+#+html: <details><summary>
+
 **** Cerebras
 #+html: </summary>
 
