Hi Salvatore,

On 2025-07-13 07:49, Salvatore Bonaccorso wrote:
> On Sat, Jul 12, 2025 at 12:04:34AM +0200, Christian Kastner wrote:
>> Nevertheless, I really need to figure out a better way to deal with
>> the llama.cpp, whisper.cpp, and ggml triad. Re-embedding isn't an
>> option as the ggml build is
Source: llama.cpp
Version: 5760+dfsg-4
Severity: grave
Tags: security upstream
X-Debbugs-Cc: car...@debian.org, Debian Security Team

Hi,

The following vulnerability was published for llama.cpp.
CVE-2025-53630[0]:
| llama.cpp is an inference of several LLM models in C/C++. Integer
| Overflow in the gguf_init_from_file_impl function in
| ggml/src/gguf.cpp can lead to Heap Overflow.

[0] https://security-tracker.debian.org/tracker/CVE-2025-53630
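
For anyone triaging: the advisory describes the classic
integer-overflow-to-heap-overflow pattern in a file parser. A minimal
sketch of that bug class follows; the function and variable names are
hypothetical, and this is not the actual gguf.cpp code, just the
generic shape of the flaw and of the usual fix (widen the arithmetic
and bound-check before allocating).

/* Illustrative sketch only: hypothetical names, not the actual
 * ggml/src/gguf.cpp code. An element count taken from an untrusted
 * file is multiplied by an element size in 32-bit arithmetic, the
 * product wraps, malloc() returns an undersized buffer, and the
 * read loop then writes past its end. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static bool load_array(FILE *f, uint32_t n_elements) {
    /* BAD: with n_elements = 0x40000001 the 32-bit product is 4,
     * so a 4-byte buffer backs a ~4 GiB write loop. */
    uint32_t nbytes = n_elements * (uint32_t) sizeof(float);
    float *data = malloc(nbytes);
    if (data == NULL) {
        return false;
    }
    for (uint32_t i = 0; i < n_elements; i++) {
        if (fread(&data[i], sizeof(float), 1, f) != 1) { /* heap overflow */
            break;
        }
    }
    free(data);
    return true;
}

/* Safer shape: do the size computation in a wider type and
 * sanity-check it before allocating. */
static bool load_array_safe(FILE *f, uint32_t n_elements) {
    uint64_t nbytes = (uint64_t) n_elements * sizeof(float);
    if (nbytes == 0 || nbytes > (1ull << 30)) { /* arbitrary 1 GiB cap */
        return false;
    }
    float *data = malloc((size_t) nbytes);
    if (data == NULL) {
        return false;
    }
    size_t got = fread(data, sizeof(float), (size_t) n_elements, f);
    free(data);
    return got == n_elements;
}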