Bug#1109124: llama.cpp: CVE-2025-53630

2025-07-15 Thread Salvatore Bonaccorso
Hi Christian,

On Sun, Jul 13, 2025 at 08:45:02AM +0200, Christian Kastner wrote:
> Hi Salvatore,
>
> On 2025-07-13 07:49, Salvatore Bonaccorso wrote:
> > On Sat, Jul 12, 2025 at 12:04:34AM +0200, Christian Kastner wrote:
> >> Nevertheless, I really need to figure out a better way to deal with

Bug#1109124: llama.cpp: CVE-2025-53630

2025-07-12 Thread Christian Kastner
Hi Salvatore,

On 2025-07-13 07:49, Salvatore Bonaccorso wrote:
> On Sat, Jul 12, 2025 at 12:04:34AM +0200, Christian Kastner wrote:
>> Nevertheless, I really need to figure out a better way to deal with
>> the llama.cpp, whisper.cpp, and ggml triad. Re-embedding isn't an option as
>> the ggml build is

Bug#1109124: llama.cpp: CVE-2025-53630

2025-07-12 Thread Salvatore Bonaccorso
Hi Christian,

On Sat, Jul 12, 2025 at 12:04:34AM +0200, Christian Kastner wrote:
> On 2025-07-11 21:19, Salvatore Bonaccorso wrote:
> > The following vulnerability was published for llama.cpp.
> >
> > CVE-2025-53630[0]:
> > | llama.cpp is an inference of several LLM models in C/C++. Integer

Bug#1109124: llama.cpp: CVE-2025-53630

2025-07-11 Thread Christian Kastner
On 2025-07-11 21:19, Salvatore Bonaccorso wrote:
> The following vulnerability was published for llama.cpp.
>
> CVE-2025-53630[0]:
> | llama.cpp is an inference of several LLM models in C/C++. Integer
> | Overflow in the gguf_init_from_file_impl function in
> | ggml/src/gguf.cpp can lead to Heap O

Bug#1109124: llama.cpp: CVE-2025-53630

2025-07-11 Thread Salvatore Bonaccorso
Source: llama.cpp
Version: 5760+dfsg-4
Severity: grave
Tags: security upstream
X-Debbugs-Cc: car...@debian.org, Debian Security Team

Hi,

The following vulnerability was published for llama.cpp.

CVE-2025-53630[0]:
| llama.cpp is an inference of several LLM models in C/C++. Integer
| Overflow in