Source: llama.cpp
Version: 5760+dfsg-4
Severity: grave
Tags: security upstream
X-Debbugs-Cc: car...@debian.org, Debian Security Team <t...@security.debian.org>

Hi,

The following vulnerability was published for llama.cpp.

CVE-2025-53630[0]:
| llama.cpp is an inference of several LLM models in C/C++. Integer
| Overflow in the gguf_init_from_file_impl function in
| ggml/src/gguf.cpp can lead to Heap Out-of-Bounds Read/Write. This
| vulnerability is fixed in commit
| 26a48ad699d50b6268900062661bd22f3e792579.


If you fix the vulnerability, please also make sure to include the
CVE (Common Vulnerabilities & Exposures) ID in your changelog entry.

For further information see:

[0] https://security-tracker.debian.org/tracker/CVE-2025-53630
    https://www.cve.org/CVERecord?id=CVE-2025-53630
[1] https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-vgg9-87g3-85w8
[2] https://github.com/ggml-org/llama.cpp/commit/26a48ad699d50b6268900062661bd22f3e792579

Please adjust the affected versions in the BTS as needed.

Regards,
Salvatore
