Your message dated Fri, 11 Jul 2025 23:43:40 +0200
with message-id <2816cc40ebc15aa0b878fffaa8b8c...@kvr.at>
and subject line Re: Bug#1109124: llama.cpp: CVE-2025-53630
has caused the Debian Bug report #1109124,
regarding llama.cpp: CVE-2025-53630
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
1109124: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1109124
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Source: llama.cpp
Version: 5760+dfsg-4
Severity: grave
Tags: security upstream
X-Debbugs-Cc: car...@debian.org, Debian Security Team <t...@security.debian.org>

Hi,

The following vulnerability was published for llama.cpp.

CVE-2025-53630[0]:
| llama.cpp is an inference of several LLM models in C/C++. Integer
| Overflow in the gguf_init_from_file_impl function in
| ggml/src/gguf.cpp can lead to Heap Out-of-Bounds Read/Write. This
| vulnerability is fixed in commit
| 26a48ad699d50b6268900062661bd22f3e792579.


If you fix the vulnerability, please also make sure to include the
CVE (Common Vulnerabilities & Exposures) ID in your changelog entry.

For further information see:

[0] https://security-tracker.debian.org/tracker/CVE-2025-53630
    https://www.cve.org/CVERecord?id=CVE-2025-53630
[1] https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-vgg9-87g3-85w8
[2] https://github.com/ggml-org/llama.cpp/commit/26a48ad699d50b6268900062661bd22f3e792579

Please adjust the affected versions in the BTS as needed.

Regards,
Salvatore

--- End Message ---
--- Begin Message ---
Version: 0.0~git20250711.b6d2ebd-1

On 2025-07-11 21:19, Salvatore Bonaccorso wrote:
> [...]

-- 
Christian Kastner

--- End Message ---