Core dump trace follows:

[New LWP 29244]
[New LWP 29241]
[New LWP 29245]
[New LWP 29243]
[New LWP 29242]
[New LWP 29246]
[New LWP 29247]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/sbin/named -u bind'.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
[Current thread is 1 (Thread 0xffffafda91a0 (LWP 29244))]
(gdb) thread apply all bt

Thread 7 (Thread 0xffffae5a61a0 (LWP 29247)):
#0  0x0000ffffb2ffbc00 in __GI_epoll_pwait (epfd=<optimized out>, events=events@entry=0xffffb0db6010, maxevents=maxevents@entry=64, timeout=timeout@entry=-1, set=set@entry=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
#1  0x0000ffffb2ffbd40 in epoll_wait (epfd=<optimized out>, events=events@entry=0xffffb0db6010, maxevents=maxevents@entry=64, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:32
#2  0x0000ffffb3471a9c in watcher (uap=0xffffb0db5010) at ../../../../lib/isc/unix/socket.c:4260
#3  0x0000ffffb335b7e4 in start_thread (arg=0xffffd7e8b09f) at pthread_create.c:486
#4  0x0000ffffb2ffbadc in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78

Thread 6 (Thread 0xffffaeda71a0 (LWP 29246)):
#0  futex_abstimed_wait_cancelable (private=0, abstime=0xffffaeda6858, expected=0, futex_word=0xffffb0db30a8) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  __pthread_cond_wait_common (abstime=<optimized out>, mutex=<optimized out>, cond=<optimized out>) at pthread_cond_wait.c:539
#2  __pthread_cond_timedwait (cond=cond@entry=0xffffb0db3080, mutex=mutex@entry=0xffffb0db3028, abstime=abstime@entry=0xffffaeda6858) at pthread_cond_wait.c:667
#3  0x0000ffffb347bc54 in isc_condition_waituntil (c=c@entry=0xffffb0db3080, m=m@entry=0xffffb0db3028, t=t@entry=0xffffb0db3074) at ../../../../lib/isc/pthreads/condition.c:59
#4  0x0000ffffb3463f44 in run (uap=0xffffb0db3010) at ../../../lib/isc/timer.c:806
#5  0x0000ffffb335b7e4 in start_thread (arg=0xffffd7e8b14f) at pthread_create.c:486
#6  0x0000ffffb2ffbadc in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78

Thread 5 (Thread 0xffffb0dab1a0 (LWP 29242)):
#0  0x0000ffffb36553c0 in ?? () from /lib/aarch64-linux-gnu/libcrypto.so.1.1
#1  0x0000ffffb0da6d30 in ?? ()
#2  0x94603695287e7065 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Thread 4 (Thread 0xffffb05aa1a0 (LWP 29243)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0xffffb0db10d0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0xffffb0db1028, cond=0xffffb0db10a8) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=cond@entry=0xffffb0db10a8, mutex=mutex@entry=0xffffb0db1028) at pthread_cond_wait.c:655
#3  0x0000ffffb345dae4 in dispatch (manager=0xffffb0db1010) at ../../../lib/isc/task.c:1089
#4  run (uap=0xffffb0db1010) at ../../../lib/isc/task.c:1315
#5  0x0000ffffb335b7e4 in start_thread (arg=0xffffd7e8b10f) at pthread_create.c:486
#6  0x0000ffffb2ffbadc in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78

Thread 3 (Thread 0xffffaf5a81a0 (LWP 29245)):
#0  futex_wait_cancelable (private=0, expected=0, futex_word=0xffffb0db10d0) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0xffffb0db1028, cond=0xffffb0db10a8) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=cond@entry=0xffffb0db10a8, mutex=mutex@entry=0xffffb0db1028) at pthread_cond_wait.c:655
#3  0x0000ffffb345dae4 in dispatch (manager=0xffffb0db1010) at ../../../lib/isc/task.c:1089
#4  run (uap=0xffffb0db1010) at ../../../lib/isc/task.c:1315
#5  0x0000ffffb335b7e4 in start_thread (arg=0xffffd7e8b10f) at pthread_create.c:486
#6  0x0000ffffb2ffbadc in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78

Thread 2 (Thread 0xffffb0e12440 (LWP 29241)):
#0  0x0000ffffb2f5ea9c in __GI___sigsuspend (set=set@entry=0xffffd7e8b158) at ../sysdeps/unix/sysv/linux/sigsuspend.c:26
#1  0x0000ffffb34671dc in isc__app_ctxrun (ctx0=ctx0@entry=0xffffb34ad6f0 <isc_g_appctx>) at ../../../../lib/isc/unix/app.c:725
#2  0x0000ffffb346746c in isc__app_run () at ../../../../lib/isc/unix/app.c:758
#3  0x0000ffffb3467e00 in isc_app_run () at ../../../../lib/isc/unix/../app_api.c:201
#4  0x0000aaaadda0c0c4 in main (argc=<optimized out>, argv=<optimized out>) at ../../../bin/named/main.c:1480

Thread 1 (Thread 0xffffafda91a0 (LWP 29244)):
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x0000ffffb2f4c8e8 in __GI_abort () at abort.c:79
#2  0x0000aaaadda1fb58 in assertion_failed (file=<optimized out>, line=<optimized out>, type=<optimized out>, cond=0xffffb3b1e398 "rbtdb->future_version == ((void *)0)") at ../../../bin/named/main.c:234
#3  0x0000ffffb3437944 in isc_assertion_failed (file=file@entry=0xffffb3b1da10 "../../../lib/dns/rbtdb.c", line=line@entry=1494, type=type@entry=isc_assertiontype_require, cond=cond@entry=0xffffb3b1e398 "rbtdb->future_version == ((void *)0)") at ../../../lib/isc/assertions.c:51
#4  0x0000ffffb3a1368c in newversion (db=0xffffada1fc70, versionp=0xffffafda87d0) at ../../../lib/dns/rbtdb.c:1546
#5  0x0000ffffb3ad4bd4 in setnsec3param (task=<optimized out>, event=<optimized out>) at ../../../lib/dns/zone.c:19016
#6  0x0000ffffb345dca0 in dispatch (manager=0xffffb0db1010) at ../../../lib/isc/task.c:1143
#7  run (uap=0xffffb0db1010) at ../../../lib/isc/task.c:1315
#8  0x0000ffffb335b7e4 in start_thread (arg=0xffffd7e8b10f) at pthread_create.c:486
#9  0x0000ffffb2ffbadc in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78
(gdb)
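
For reference, the symbols above came from the bind9-dbgsym package,
installed roughly as suggested in the quoted instructions below. A minimal
sketch of the steps (the sources.list.d file name and the core path are
placeholders):

  echo 'deb http://deb.debian.org/debian-debug/ stable-debug main' \
      | sudo tee /etc/apt/sources.list.d/debug.list
  sudo apt update
  sudo apt install bind9-dbgsym
  gdb /usr/sbin/named /path/to/core    # then: thread apply all bt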


On 9/3/19 1:39 PM, Ondřej Surý wrote:
> Nope, sorry, it’s here:
> 
> https://wiki.debian.org/AutomaticDebugPackages
> 
> …
> 
> e.g. adding:
> 
> deb http://deb.debian.org/debian-debug/ stable-debug main
> 
> should do the trick.
> 
> Ondrej
> --
> Ondřej Surý
> ond...@sury.org
> 
> 
> 
>> On 3 Sep 2019, at 15:37, Ondřej Surý <ond...@sury.org> wrote:
>>
>> I don’t know why it’s not available in stable, but since we haven’t
>> updated the package in unstable yet, it should be identical to:
>>
>> https://packages.debian.org/unstable/bind9-dbgsym
>>
>> Ondrej
>> --
>> Ondřej Surý
>> ond...@sury.org
>>
>>
>>
>>> On 3 Sep 2019, at 15:19, Matt Corallo <li...@bluematt.me> wrote:
>>>
>>> I do have a core, but don't see what package to get debug symbols from?
>>>
>>> Matt
>>>
>>> On 9/3/19 12:27 PM, Ondřej Surý wrote:
>>>> Hi,
>>>>
>>>> could you please check whether you have a core around and whether you
>>>> could install debug symbols to decode the coredump?
>>>>
>>>> Ondrej
>>>> --
>>>> Ondřej Surý
>>>> ond...@sury.org
>>>>
>>>>
>>>>
>>>>> On 3 Sep 2019, at 13:51, li...@bluematt.me wrote:
>>>>>
>>>>> Package: bind9
>>>>> Version: 1:9.11.5.P4+dfsg-5.1
>>>>>
>>>>> Woke up this morning to the following in syslog. It looks like it may
>>>>> be a race condition between reloading zones that have changed and
>>>>> setting nsec3params immediately after/during the reload.
>>>>>
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: zone bluematt.me/IN (unsigned): loaded serial 2015412479
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: received control channel command 'signing -nsec3param 1 0 100 D1D6B923 mattcorallo.com'
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: ../../../lib/dns/rbtdb.c:1494: REQUIRE(rbtdb->future_version == ((void *)0)) failed, back trace
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #0 0xaaaadda1f958 in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #1 0xffffb3437944 in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #2 0xffffb3a1368c in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #3 0xffffb3ad4bd4 in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #4 0xffffb345dca0 in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #5 0xffffb335b7e4 in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: #6 0xffffb2ffbadc in ??
>>>>> Sep  3 05:56:06 odroid-dns named[29241]: exiting (due to assertion failure)
>>>>>
>>>>
>>
> 
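
A note on the assertion itself: the failed REQUIRE at rbtdb.c:1494 means
newversion() (frame #4) was entered while rbtdb->future_version was still
non-NULL, i.e. an earlier database version was still open when the
setnsec3param task (frame #5) requested a new one. That is consistent with
the reload race described in the original report. Below is a toy model of
that invariant, purely my own illustration and not the ISC source (assert
stands in for ISC's always-on REQUIRE):

  #include <assert.h>
  #include <stddef.h>

  /* Toy model: an rbtdb allows at most one open, uncommitted
   * "future" version at a time. */
  struct rbtdb {
          void *future_version;
  };

  static void *
  newversion(struct rbtdb *rbtdb) {
          /* Equivalent of REQUIRE(rbtdb->future_version == ((void *)0)):
           * abort, as named did, if a version is already pending. */
          assert(rbtdb->future_version == NULL);
          static int token;
          rbtdb->future_version = &token;
          return rbtdb->future_version;
  }

  int
  main(void) {
          struct rbtdb db = { .future_version = NULL };
          newversion(&db);  /* first open succeeds */
          newversion(&db);  /* a second open before the first is
                             * committed/closed trips the assertion,
                             * like setnsec3param during a reload */
          return 0;
  }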
