branch: externals/minuet
commit 17661cdf826ecc779b232665a41ab3d6bf48db7c
Author: Milan Glacier <d...@milanglacier.com>
Commit: Milan Glacier <d...@milanglacier.com>

    doc: update README.
---
 README.md | 22 ++++++++++++----------
 prompt.md |  6 ++++--
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index c747b7db79..bc9b62aef1 100644
--- a/README.md
+++ b/README.md
@@ -97,7 +97,9 @@ managers.
 
 ```
 
-Example for Ollama:
+**LLM Provider Examples**:
+
+**Ollama (`qwen2.5-coder:3b`)**:
 
 <details>
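As a rough, unverified sketch of what such a configuration looks like (the collapsed block holds the canonical snippet; the option and key names below, e.g. `minuet-openai-fim-compatible-options` and `:end-point`, are assumptions based on this README's conventions):

```lisp
;; A sketch, not the canonical example: route Minuet's completions
;; through a local Ollama server's OpenAI-compatible FIM endpoint.
(use-package minuet
  :config
  (setq minuet-provider 'openai-fim-compatible)
  (plist-put minuet-openai-fim-compatible-options
             :end-point "http://localhost:11434/v1/completions")
  ;; Ollama does not validate keys; any non-empty value is assumed fine.
  (plist-put minuet-openai-fim-compatible-options :api-key "TERM")
  (plist-put minuet-openai-fim-compatible-options :name "Ollama")
  (plist-put minuet-openai-fim-compatible-options :model "qwen2.5-coder:3b"))
```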
 
@@ -119,7 +121,7 @@ Example for Ollama:
 
 </details>
 
-Example for Fireworks with `llama-3.3-70b` model:
+**Fireworks (`llama-3.3-70b`)**:
 
 <details>
 
@@ -178,8 +180,8 @@ Ollama using the `openai-fim-compatible` provider.
 
 Note: as of January 27, 2025, the high server demand from deepseek may
 significantly slow down the default provider used by Minuet
-(`openai-fim-compatible` with deepseek). We recommend trying
-alternative providers instead.
+(`openai-fim-compatible` with deepseek). We recommend trying alternative
+providers instead.
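Switching is a single setter, as sketched below (`gemini` is an illustrative value; use whichever provider you hold credentials for):

```lisp
;; A sketch: point Minuet at a different built-in provider than the
;; default `openai-fim-compatible'.
(setq minuet-provider 'gemini)
```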
 
 # Prompt
 
@@ -273,7 +275,7 @@ You can customize the provider options using `plist-put`, for example:
 )
 ```
 
-To pass optional parameters (like `max_tokens` and `top_p`) to send to the REST
+To pass optional parameters (like `max_tokens` and `top_p`) in the curl
 request, you can use function `minuet-set-optional-options`:
 
 ```lisp
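 ;; A sketch of such a call; `minuet-openai-options' is an assumption --
 ;; substitute the options variable of whichever provider you configured.
 ;; The values are merged into the JSON payload of the curl request.
 (minuet-set-optional-options minuet-openai-options :max_tokens 256)
 (minuet-set-optional-options minuet-openai-options :top_p 0.9)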
@@ -483,11 +485,11 @@ For example, you can set the `end_point` to
 <details>
 
 Additionally, for Ollama users, it is essential to verify whether the model's
-template supports FIM completion. For example,
-[qwen2.5-coder's template](https://ollama.com/library/qwen2.5-coder/blobs/e94a8ecb9327)
-is a supported model. However it may come as a surprise to some users that,
-`deepseek-coder` does not support the FIM template, and you should use
-`deepseek-coder-v2` instead.
+template supports FIM completion. For example, qwen2.5-coder offers FIM support,
+as suggested in its
+[template](https://ollama.com/library/qwen2.5-coder/blobs/e94a8ecb9327).
+However, it may come as a surprise to some users that `deepseek-coder` does not
+support the FIM template, and you should use `deepseek-coder-v2` instead.
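To check a model yourself, you can print its template from the Ollama CLI (for example `ollama show qwen2.5-coder --template`, assuming a reasonably recent CLI) and look for FIM placeholders such as `<|fim_prefix|>`, `<|fim_suffix|>`, and `<|fim_middle|>`.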
 
 The following config is the default.
 
diff --git a/prompt.md b/prompt.md
index 63e06975ce..12740910c9 100644
--- a/prompt.md
+++ b/prompt.md
@@ -107,8 +107,6 @@ def fibonacci(n):
 ## Default Chat Input Example
 
 The chat input represents the final prompt delivered to the LLM for completion.
-Its template follows a structured format similar to the system prompt and can be
-customized as follows:
 
 The chat input template follows a structure similar to the system prompt and can
 be customized using the following format:
@@ -137,6 +135,10 @@ plist containing the following values:
 - `:before-cursor`
 - `:after-cursor`
 - `:language-and-tab`
+- `:is-incomplete-before`: indicates whether the context before the cursor is
+  incomplete
+- `:is-incomplete-after`: indicates whether the context after the cursor is
+  incomplete
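
As a rough sketch of how these keys might be wired into a customized template (the `minuet-openai-options` variable, the `:chat-input` key, and the triple-brace placeholder syntax are all assumptions here, not verified API):

```lisp
;; A sketch only: names and placeholder syntax are assumptions based on
;; the key list above.
(plist-put (plist-get minuet-openai-options :chat-input)
           :template
           (concat "{{{:language-and-tab}}}\n"
                   "{{{:before-cursor}}}<cursorPosition>"
                   "{{{:after-cursor}}}"))
```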
 
 ## Customization
 
