Your message dated Fri, 26 Dec 2025 18:27:19 +0100
with message-id <[email protected]>
and subject line Re: Bug#1123651: llama-server: Crash with SIGABRT on assertion
(`system_regions_fine_.size() > 0')
has caused the Debian Bug report #1123651,
regarding llama-server: Crash with SIGABRT on assertion
(`system_regions_fine_.size() > 0')
to be marked as done.
This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.
(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)
--
1123651: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1123651
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: llama.cpp-tools
Version: 5882+dfsg-3
Dear Maintainer,
The llama-server program now crashes where it used to succeed. This is
the message reported when trying to start it:
% llama-server -ngl 256 -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -m ../models-llama/DeepSeek-R1-Distill-Qwen-14B-Q8_0.gguf
load_backend: loaded BLAS backend from /usr/lib/x86_64-linux-gnu/ggml/backends0/libggml-blas.so
ROCm calling rocblas_initialize as a workaround for a rocBLAS bug
llama-server: ./src/core/runtime/runtime.cpp:198: void rocr::core::Runtime::RegisterAgent(rocr::core::Agent*, bool): Assertion `system_regions_fine_.size() > 0' failed.
Abort (SIGABRT) llama-server -ngl 256 -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -m ../models-llama/DeepSeek-R1-Distill-Qwen-14B-Q8_0.gguf
%
I have not tested other models.
-- System Information:
Debian Release: forky/sid
APT prefers testing
APT policy: (500, 'testing'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 6.17.11+deb14-amd64 (SMP w/32 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE,
TAINT_UNSIGNED_MODULE
Locale: LANG=nb_NO.UTF-8, LC_CTYPE=nb_NO.UTF-8 (charmap=UTF-8),
LANGUAGE=nb_NO:nb:no_NO:no:nn_NO:nn:da:sv:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Versions of packages llama.cpp-tools depends on:
ii libc6 2.42-5
ii libcurl4t64 8.17.0-3
ii libgcc-s1 15.2.0-11
ii libggml0 0.0~git20250712.d62df60-5
ii libllama0 5882+dfsg-3
ii libstdc++6 15.2.0-11
llama.cpp-tools recommends no packages.
llama.cpp-tools suggests no packages.
-- no debconf information
--- End Message ---
--- Begin Message ---
Version: 6641+dfsg-2
I built the sid editions of rocblas, hipblas, ggml and llama.cpp on a
testing system, and managed to get llama-server working. I had to
manually edit baseURL in
/usr/share/llama.cpp-tools/llama-server/themes/simplechat/simplechat.js
to change the 127.0.0.1 address to work with my reverse tunnel. Perhaps
the JavaScript code could be adjusted to detect a more appropriate
baseURL from the URL of the page, as in the sketch below?
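
For illustration only, here is an untested sketch of that idea. It
assumes the web UI is served from the same origin as the API; the name
baseURL comes from the shipped simplechat.js, while the fallback port
8080 is my assumption:

    // Untested sketch: derive the API base from the page's own origin
    // instead of hard-coding 127.0.0.1, so the UI also works through
    // reverse tunnels and proxies.
    let baseURL = window.location.origin;
    // Fallback for file:// or other origin-less contexts (the
    // 127.0.0.1:8080 default here is an assumption; adjust to match
    // the server's actual listen address):
    if (!baseURL || !baseURL.startsWith("http")) {
        baseURL = "http://127.0.0.1:8080";
    }

Since window.location.origin resolves to whatever scheme, host and
port the page was actually loaded from, a reverse tunnel or proxy
would then be picked up automatically, with no manual edit needed.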
-- System Information:
Debian Release: forky/sid
APT prefers testing
APT policy: (500, 'testing')
Architecture: amd64 (x86_64)
Kernel: Linux 6.17.11+deb14-amd64 (SMP w/32 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE,
TAINT_UNSIGNED_MODULE
Locale: LANG=nb_NO.UTF-8, LC_CTYPE=nb_NO.UTF-8 (charmap=UTF-8),
LANGUAGE=nb_NO:nb:no_NO:no:nn_NO:nn:da:sv:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Versions of packages llama.cpp depends on:
ii llama.cpp-tools 6641+dfsg-2
Versions of packages llama.cpp recommends:
ii llama.cpp-tools-extra 6641+dfsg-2
ii python3-gguf 6641+dfsg-2
Versions of packages llama.cpp suggests:
pn llama.cpp-examples <none>
-- no debconf information
--- End Message ---