branch: externals/ellama
commit fb01a0cefe4ae813f141d255ce5afeae573968e2
Merge: ad88edf9ed 4bb90fcbd6
Author: Sergey Kostyaev <s-kosty...@users.noreply.github.com>
Commit: GitHub <nore...@github.com>

    Merge pull request #241 from s-kostyaev/improve-readme
    
    Add simple configuration example to readme
---
 README.org | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/README.org b/README.org
index 205216a3dc..bca8fb52a7 100644
--- a/README.org
+++ b/README.org
@@ -31,6 +31,19 @@ You can use ~ellama~ with other model or other llm provider.
 Without any configuration, the first available ollama model will be used.
 You can customize ellama configuration like this:
 
+#+BEGIN_SRC  emacs-lisp
+  (use-package ellama
+    :ensure t
+    :bind ("C-c e" . ellama-transient-main-menu)
+    :config
+    ;; show ellama context in header line in all buffers
+    (ellama-context-header-line-global-mode +1)
+    ;; send last message in chat buffer with C-c C-c
+    (add-hook 'org-ctrl-c-ctrl-c-final-hook #'ellama-chat-send-last-message))
+#+END_SRC
+
+More sophisticated configuration example:
+
 #+BEGIN_SRC  emacs-lisp
   (use-package ellama
     :ensure t
@@ -99,7 +112,7 @@ You can customize ellama configuration like this:
     ;; show ellama context in header line in all buffers
     (ellama-context-header-line-global-mode +1)
     ;; send last message in chat buffer with C-c C-c
-    (add-hook 'org-ctrl-c-ctrl-c-hook #'ellama-chat-send-last-message))
+    (add-hook 'org-ctrl-c-ctrl-c-final-hook #'ellama-chat-send-last-message))
 #+END_SRC
 
 ** Commands
@@ -393,8 +406,7 @@ The following variables can be customized for the Ellama client:
 - ~ellama-assistant-nick~: The assistant nick in logs.
 - ~ellama-language~: The language for Ollama translation. Default language is english.
-- ~ellama-provider~: llm provider for ellama. Default provider is
-~ollama~ with [[https://ollama.ai/library/zephyr][zephyr]] model.
+- ~ellama-provider~: llm provider for ellama.
 There are many supported providers: ~ollama~, ~open ai~, ~vertex~,
 ~GPT4All~. For more information see [[https://elpa.gnu.org/packages/llm.html][llm documentation]].
 - ~ellama-providers~: association list of model llm providers with
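
The substantive fix in the second hunk is the hook rename: `org-ctrl-c-ctrl-c-hook` runs before Org's own C-c C-c handlers, whereas `org-ctrl-c-ctrl-c-final-hook` runs only as a last resort, after every built-in handler has declined the event. A minimal sketch of the resulting setup (assuming the ellama package and its `ellama-chat-send-last-message` command are installed):

```emacs-lisp
;; Sketch of the configuration this patch documents; assumes `ellama'
;; is installed from an ELPA archive.
(use-package ellama
  :ensure t
  :bind ("C-c e" . ellama-transient-main-menu)
  :config
  ;; The "final" hook fires only when no other C-c C-c handler in Org
  ;; claims the key, so source-block evaluation, checkbox toggling,
  ;; etc. keep priority over sending the chat message.
  (add-hook 'org-ctrl-c-ctrl-c-final-hook #'ellama-chat-send-last-message))
```

With this in place, pressing C-c C-c in an ellama chat buffer (an Org buffer) sends the last message only when point is not on a construct Org itself handles.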
