This is an automated email from the ASF dual-hosted git repository.

traky pushed a commit to branch 3.13-docs-patch
in repository https://gitbox.apache.org/repos/asf/apisix.git

commit 39bbaee6412e03bf2a47e83ff46907fe59e9cc99
Author: Traky Deng <[email protected]>
AuthorDate: Fri Aug 8 13:59:03 2025 +0800

    remove example in ai-proxy.md
---
 docs/en/latest/plugins/ai-proxy.md | 68 --------------------------------------
 1 file changed, 68 deletions(-)

diff --git a/docs/en/latest/plugins/ai-proxy.md b/docs/en/latest/plugins/ai-proxy.md
index 239c6df5d..06c1427f6 100644
--- a/docs/en/latest/plugins/ai-proxy.md
+++ b/docs/en/latest/plugins/ai-proxy.md
@@ -383,71 +383,3 @@ You should receive a response similar to the following:
   }
 }
 ```
-
-### Include LLM Information in Access Log
-
-The following example demonstrates how you can log LLM request-related information in the gateway's access log to improve analytics and auditing. The following variables are available:
-
-* `request_type`: Type of request, where the value could be `traditional_http`, `ai_chat`, or `ai_stream`.
-* `llm_time_to_first_token`: Duration from sending the request to receiving the first token from the LLM service, in milliseconds.
-* `llm_model`: LLM model.
-* `llm_prompt_tokens`: Number of tokens in the prompt.
-* `llm_completion_tokens`: Number of tokens in the chat completion.
-
-:::note
-
-These variables will become available in APISIX 3.13.0.
-
-:::
-
-Update the access log format in your configuration file to include additional LLM-related variables:
-
-```yaml title="conf/config.yaml"
-nginx_config:
-  http:
-    access_log_format: "$remote_addr   $remote_user [$time_local] $http_host \"$request_line\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\" \"$apisix_request_id\" \"$request_type\" \"$llm_time_to_first_token\" \"$llm_model\" \"$llm_prompt_tokens\" \"$llm_completion_tokens\""
-```
-
-Reload APISIX for configuration changes to take effect.
-
-Now if you create a Route and send a request following the [Proxy to OpenAI example](#proxy-to-openai), you should receive a response similar to the following:
-
-```json
-{
-  ...,
-  "model": "gpt-4-0613",
-  "choices": [
-    {
-      "index": 0,
-      "message": {
-        "role": "assistant",
-        "content": "1+1 equals 2.",
-        "refusal": null,
-        "annotations": []
-      },
-      "logprobs": null,
-      "finish_reason": "stop"
-    }
-  ],
-  "usage": {
-    "prompt_tokens": 23,
-    "completion_tokens": 8,
-    "total_tokens": 31,
-    "prompt_tokens_details": {
-      "cached_tokens": 0,
-      "audio_tokens": 0
-    },
-    ...
-  },
-  "service_tier": "default",
-  "system_fingerprint": null
-}
-```
-
-In the gateway's access log, you should see a log entry similar to the following:
-
-```text
-192.168.215.1   - [21/Mar/2025:04:28:03 +0000] api.openai.com "POST /anything HTTP/1.1" 200 804 2.858 "-" "curl/8.6.0"   -   "http://api.openai.com" "5c5e0b95f8d303cb81e4dc456a4b12d9" "ai_chat" "2858" "gpt-4" "23" "8"
-```
-
-The access log entry shows the request type is `ai_chat`, time to first token is `2858` milliseconds, LLM model is `gpt-4`, prompt token usage is `23`, and completion token usage is `8`.
