Package: src:dask.distributed
Version: 2022.12.1+ds.1-3
Severity: important
Tags: ftbfs

Dear maintainer:

During a rebuild of all packages in bookworm, our package failed to build:

--------------------------------------------------------------------------------
[...]
 debian/rules binary
dh binary --with python3,sphinxdoc --buildsystem=pybuild
   dh_update_autotools_config -O--buildsystem=pybuild
   dh_autoreconf -O--buildsystem=pybuild
   dh_auto_configure -O--buildsystem=pybuild
I: pybuild base:240: python3.11 setup.py config
running config
   debian/rules override_dh_auto_build
make[1]: Entering directory '/<<PKGBUILDDIR>>'
rm -f distributed/comm/tests/__init__.py
set -e; \
for p in distributed/http/static/js/anime.js distributed/http/static/js/reconnecting-websocket.js; do \
    uglifyjs -o $p debian/missing-sources/$p ; \
done

[... snipped ...]

../../../distributed/tests/test_worker_state_machine.py::test_throttling_incoming_transfer_on_transfer_bytes_different_workers PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_do_not_throttle_connections_while_below_threshold PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_throttle_on_transfer_bytes_regardless_of_threshold PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_worker_nbytes[executing] PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_worker_nbytes[long-running] PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_fetch_count PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_task_counts PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_task_counts_with_actors PASSED [100%]

=================================== FAILURES ===================================
_________________ test_do_not_block_event_loop_during_shutdown _________________

s = <Scheduler 'tcp://127.0.0.1:38957', workers: 0, cores: 0, tasks: 0>

    @gen_cluster(nthreads=[])
    async def test_do_not_block_event_loop_during_shutdown(s):
        loop = asyncio.get_running_loop()
        called_handler = threading.Event()
        block_handler = threading.Event()

        w = await Worker(s.address)
        executor = w.executors["default"]

        # The block wait must be smaller than the test timeout and smaller than the
        # default value for timeout in `Worker.close`
        async def block():
            def fn():
                called_handler.set()
                assert block_handler.wait(20)

            await loop.run_in_executor(executor, fn)

        async def set_future():
            while True:
                try:
                    await loop.run_in_executor(executor, sleep, 0.1)
                except RuntimeError:  # executor has started shutting down
                    block_handler.set()
                    return

        async def close():
            called_handler.wait()
            # executor_wait is True by default but we want to be explicit here
            await w.close(executor_wait=True)

>       await asyncio.gather(block(), close(), set_future())

../../../distributed/tests/test_worker.py:3672:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../distributed/tests/test_worker.py:3657: in block
    await loop.run_in_executor(executor, fn)
distributed/_concurrent_futures_thread.py:65: in run
    result = self.fn(*self.args, **self.kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def fn():
        called_handler.set()
>       assert block_handler.wait(20)
E       assert False
E        +  where False = <bound method Event.wait of <threading.Event at 0x7fdebd199810: unset>>(20)
E        +    where <bound method Event.wait of <threading.Event at 0x7fdebd199810: unset>> = <threading.Event at 0x7fdebd199810: unset>.wait

../../../distributed/tests/test_worker.py:3655: AssertionError
----------------------------- Captured stdout call -----------------------------
Dumped cluster state to test_cluster_dump/test_do_not_block_event_loop_during_shutdown.yaml
----------------------------- Captured stderr call -----------------------------
2024-11-09 21:18:38,124 - distributed.scheduler - INFO - State start
2024-11-09 21:18:38,125 - distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38957
2024-11-09 21:18:38,125 - distributed.scheduler - INFO -   dashboard at:            127.0.0.1:40905
2024-11-09 21:18:38,128 - distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:36801
2024-11-09 21:18:38,128 - distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:36801
2024-11-09 21:18:38,128 - distributed.worker - INFO -          dashboard at:            127.0.0.1:36225
2024-11-09 21:18:38,128 - distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:38957
2024-11-09 21:18:38,128 - distributed.worker - INFO - -------------------------------------------------
2024-11-09 21:18:38,128 - distributed.worker - INFO -               Threads:                          1
2024-11-09 21:18:38,128 - distributed.worker - INFO -                Memory:                   3.71 GiB
2024-11-09 21:18:38,128 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-j8lu9vvc
2024-11-09 21:18:38,128 - distributed.worker - INFO - -------------------------------------------------
2024-11-09 21:18:38,146 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36801', status: init, memory: 0, processing: 0>
2024-11-09 21:18:38,162 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36801
2024-11-09 21:18:38,162 - distributed.core - INFO - Starting established connection to tcp://127.0.0.1:45266
2024-11-09 21:18:38,162 - distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:38957
2024-11-09 21:18:38,162 - distributed.worker - INFO - -------------------------------------------------
2024-11-09 21:18:38,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36801. Reason: worker-close
2024-11-09 21:18:38,163 - distributed.core - INFO - Starting established connection to tcp://127.0.0.1:38957
2024-11-09 21:18:38,164 - distributed.core - INFO - Connection to tcp://127.0.0.1:38957 has been closed.
2024-11-09 21:18:38,164 - distributed.core - INFO - Received 'close-stream' from tcp://127.0.0.1:45266; closing.
2024-11-09 21:18:38,164 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36801', status: closing, memory: 0, processing: 0>
2024-11-09 21:18:38,164 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36801
2024-11-09 21:18:38,165 - distributed.scheduler - INFO - Lost all workers
2024-11-09 21:18:58,186 - distributed.scheduler - INFO - Scheduler closing...
2024-11-09 21:18:58,186 - distributed.scheduler - INFO - Scheduler closing all comms
------------------------------ Captured log call -------------------------------
INFO     distributed.scheduler:scheduler.py:1619 State start
INFO     distributed.scheduler:scheduler.py:3864   Scheduler at:     tcp://127.0.0.1:38957
INFO     distributed.scheduler:scheduler.py:3866   dashboard at:            127.0.0.1:40905
INFO     distributed.worker:worker.py:1416       Start worker at:      tcp://127.0.0.1:36801
INFO     distributed.worker:worker.py:1417          Listening to:      tcp://127.0.0.1:36801
INFO     distributed.worker:worker.py:1422          dashboard at:            127.0.0.1:36225
INFO     distributed.worker:worker.py:1423 Waiting to connect to:      tcp://127.0.0.1:38957
INFO     distributed.worker:worker.py:1424 -------------------------------------------------
INFO     distributed.worker:worker.py:1425               Threads:                          1
INFO     distributed.worker:worker.py:1427                Memory:                   3.71 GiB
INFO     distributed.worker:worker.py:1431       Local Directory: /tmp/dask-worker-space/worker-j8lu9vvc
INFO     distributed.worker:worker.py:1130 -------------------------------------------------
INFO     distributed.scheduler:scheduler.py:4216 Register worker <WorkerState 'tcp://127.0.0.1:36801', status: init, memory: 0, processing: 0>
INFO     distributed.scheduler:scheduler.py:5434 Starting worker compute stream, tcp://127.0.0.1:36801
INFO     distributed.core:core.py:867 Starting established connection to tcp://127.0.0.1:45266
INFO     distributed.worker:worker.py:1199         Registered to:      tcp://127.0.0.1:38957
INFO     distributed.worker:worker.py:1200 -------------------------------------------------
INFO     distributed.worker:worker.py:1514 Stopping worker at tcp://127.0.0.1:36801. Reason: worker-close
INFO     distributed.core:core.py:867 Starting established connection to tcp://127.0.0.1:38957
INFO     distributed.core:core.py:877 Connection to tcp://127.0.0.1:38957 has been closed.
INFO     distributed.core:core.py:892 Received 'close-stream' from tcp://127.0.0.1:45266; closing.
INFO     distributed.scheduler:scheduler.py:4781 Remove worker <WorkerState 'tcp://127.0.0.1:36801', status: closing, memory: 0, processing: 0>
INFO     distributed.core:core.py:1480 Removing comms to tcp://127.0.0.1:36801
INFO     distributed.scheduler:scheduler.py:4861 Lost all workers
INFO     distributed.scheduler:scheduler.py:3929 Scheduler closing...
INFO     distributed.scheduler:scheduler.py:3951 Scheduler closing all comms
============================= slowest 20 durations =============================
30.27s call     distributed/tests/test_scheduler.py::test_forget_tasks_while_processing
20.06s call     distributed/tests/test_worker.py::test_do_not_block_event_loop_during_shutdown
19.23s call     distributed/tests/test_scheduler.py::test_failing_task_increments_suspicious
10.14s call     distributed/tests/test_scheduler.py::test_log_tasks_during_restart
9.84s call     distributed/tests/test_utils_test.py::test_bare_cluster
9.56s call     distributed/tests/test_worker.py::test_tick_interval
8.48s call     distributed/tests/test_stress.py::test_cancel_stress_sync
7.52s call     distributed/tests/test_scheduler.py::test_restart_nanny_timeout_exceeded
5.71s call     distributed/tests/test_stress.py::test_stress_scatter_death
5.38s call     distributed/tests/test_steal.py::test_allow_tasks_stolen_before_first_completes
5.26s call     distributed/tests/test_steal.py::test_balance_with_longer_task
5.01s call     distributed/tests/test_failed_workers.py::test_worker_doesnt_await_task_completion
4.89s call     distributed/tests/test_failed_workers.py::test_restart_sync
4.72s call     distributed/tests/test_scheduler.py::test_close_nanny
4.56s call     distributed/tests/test_failed_workers.py::test_failing_worker_with_additional_replicas_on_cluster
4.31s call     distributed/tests/test_worker.py::test_package_install_restarts_on_nanny
4.15s call     distributed/tests/test_worker.py::test_heartbeat_missing_restarts
4.12s call     distributed/tests/test_steal.py::test_restart
4.05s call     distributed/tests/test_steal.py::test_steal_twice
4.02s call     distributed/tests/test_failed_workers.py::test_multiple_clients_restart
=========================== short test summary info ============================
SKIPPED [1] ../../../distributed/tests/test_client.py:855: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:881: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:900: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:1758: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:2004: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:2598: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:2627: Use fast random selection now
SKIPPED [1] ../../../distributed/tests/test_client.py:3261: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:4549: Now prefer first-in-first-out
SKIPPED [1] ../../../distributed/tests/test_client.py:4715: could not import 'scipy': No module named 'scipy'
SKIPPED [1] ../../../distributed/tests/test_client.py:5963: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:6161: could not import 'bokeh.plotting': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_client.py:6472: known intermittent failure
SKIPPED [1] ../../../distributed/tests/test_client.py:6556: could not import 'bokeh': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_client.py:6607: On Py3.10+ semaphore._loop is not bound until .acquire() blocks
SKIPPED [1] ../../../distributed/tests/test_client.py:6627: On Py3.10+ semaphore._loop is not bound until .acquire() blocks
SKIPPED [1] ../../../distributed/tests/test_client.py:7020: could not import 'bokeh': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_config.py:316: could not import 'uvloop': No module named 'uvloop'
SKIPPED [1] ../../../distributed/tests/test_core.py:955: could not import 'crick': No module named 'crick'
SKIPPED [1] ../../../distributed/tests/test_core.py:964: could not import 'crick': No module named 'crick'
SKIPPED [1] ../../../distributed/tests/test_counter.py:13: no crick library
SKIPPED [1] ../../../distributed/tests/test_dask_collections.py:193: could not import 'sparse': No module named 'sparse'
SKIPPED [2] ../../../distributed/tests/test_nanny.py:510: could not import 'ucp': No module named 'ucp'
SKIPPED [1] ../../../distributed/tests/test_profile.py:74: could not import 'stacktrace': No module named 'stacktrace'
SKIPPED [1] ../../../distributed/tests/test_queues.py:89: getting same client from main thread
SKIPPED [1] ../../../distributed/tests/test_resources.py:370: Skipped
SKIPPED [1] ../../../distributed/tests/test_resources.py:427: Should protect resource keys from optimization
SKIPPED [1] ../../../distributed/tests/test_resources.py:448: atop fusion seemed to break this
SKIPPED [1] ../../../distributed/tests/test_scheduler.py:262: Not relevant with queuing on; see https://github.com/dask/distributed/issues/7204
SKIPPED [1] ../../../distributed/tests/test_scheduler.py:2406: could not import 'bokeh': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_steal.py:285: Skipped
SKIPPED [1] ../../../distributed/tests/test_steal.py:1284: executing heartbeats not considered yet
SKIPPED [1] ../../../distributed/tests/test_stress.py:194: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_utils.py:141: could not import 'IPython': No module named 'IPython'
SKIPPED [1] ../../../distributed/tests/test_utils.py:331: could not import 'pyarrow': No module named 'pyarrow'
SKIPPED [1] ../../../distributed/tests/test_utils_test.py:145: This hangs on travis
SKIPPED [1] ../../../distributed/tests/test_worker.py:223: don't yet support uploading pyc files
SKIPPED [1] ../../../distributed/tests/test_worker.py:319: could not import 'crick': No module named 'crick'
SKIPPED [2] ../../../distributed/tests/test_worker.py:1475: could not import 'ucp': No module named 'ucp'
SKIPPED [1] ../../../distributed/tests/test_worker.py:2014: skip if we have elevated privileges
SKIPPED [1] ../../../distributed/tests/test_worker_memory.py:167: fails on 32-bit, is it asking for large memory?
XFAIL ../../../distributed/tests/test_actor.py::test_linear_access - Tornado can pass things out of order. Should rely on sending small messages rather than rpc
XFAIL ../../../distributed/tests/test_client.py::test_nested_prioritization - https://github.com/dask/dask/pull/6807
XFAIL ../../../distributed/tests/test_client.py::test_annotations_survive_optimization - https://github.com/dask/dask/issues/7036
XFAIL ../../../distributed/tests/test_nanny.py::test_no_unnecessary_imports_on_worker[pandas] - distributed#5723
XFAIL ../../../distributed/tests/test_preload.py::test_client_preload_text - The preload argument to the client isn't supported yet
XFAIL ../../../distributed/tests/test_preload.py::test_client_preload_click - The preload argument to the client isn't supported yet
XFAIL ../../../distributed/tests/test_resources.py::test_collections_get[True] - don't track resources through optimization
XFAIL ../../../distributed/tests/test_scheduler.py::test_rebalance_raises_missing_data3[True] - reason: Freeing keys and gathering data is using different channels (stream vs explicit RPC). Therefore, the partial-fail is very timing sensitive and subject to a race condition. This test assumes that the data is freed before the rebalance get_data requests come in but merely deleting the futures is not sufficient to guarantee this
XFAIL ../../../distributed/tests/test_utils_perf.py::test_gc_diagnosis_rss_win - flaky and re-fails on rerun
XFAIL ../../../distributed/tests/test_utils_test.py::test_gen_test - Test should always fail to ensure the body of the test function was run
XFAIL ../../../distributed/tests/test_utils_test.py::test_gen_test_legacy_implicit - Test should always fail to ensure the body of the test function was run
XFAIL ../../../distributed/tests/test_utils_test.py::test_gen_test_legacy_explicit - Test should always fail to ensure the body of the test function was run
XFAIL ../../../distributed/tests/test_worker.py::test_share_communication - very high flakiness
XFAIL ../../../distributed/tests/test_worker.py::test_dont_overlap_communications_to_same_worker - very high flakiness
XFAIL ../../../distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_flight - https://github.com/dask/distributed/issues/6705
XFAIL ../../../distributed/tests/test_worker_state_machine.py::test_gather_dep_failure - https://github.com/dask/distributed/issues/6705
FAILED ../../../distributed/tests/test_worker.py::test_do_not_block_event_loop_during_shutdown - assert False
= 1 failed, 2121 passed, 43 skipped, 127 deselected, 16 xfailed, 8 xpassed, 4 rerun in 1073.21s (0:17:53) =
E: pybuild pybuild:388: test: plugin distutils failed with: exit code=1: cd 
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.11_distributed/build; python3.11 -m pytest 
/<<PKGBUILDDIR>>/distributed/tests -v --ignore=distributed/deploy/utils_test.py 
--ignore=distributed/utils_test.py --ignore=continuous_integration --ignore=docs --ignore=.github --timeout-method=signal 
--timeout=300 -m "not (avoid_ci or isinstalled or slow)" -k "not ( test_reconnect or test_jupyter_server or 
test_stack_overflow or test_pause_while_spilling or test_digests or test_dashboard_host or test_runspec_regression_sync or 
test_popen_timeout or test_runspec_regression_sync or test_client_async_before_loop_starts or 
test_plugin_internal_exception or test_client_async_before_loop_starts or test_web_preload or test_web_preload_worker or 
test_bandwidth_clear or test_include_communication_in_occupancy or test_worker_start_exception or 
test_task_state_instance_are_garbage_collected or test_spillbuffer_oserror or test_release_retry or test_timeout_zero  
)"
dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.11 returned exit code 13
make[1]: *** [debian/rules:76: override_dh_auto_test] Error 25
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
make: *** [debian/rules:42: binary] Error 2
dpkg-buildpackage: error: debian/rules binary subprocess returned exit status 2
--------------------------------------------------------------------------------

I've put the full build log here:

https://people.debian.org/~sanvila/build-logs/bookworm/

Note: I'm going to disable the test myself, using a very specific "skipif"
that checks the number of CPUs (see the sketch below).
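
For concreteness, this is roughly the kind of guard I have in mind. It is
only a sketch, not the final patch: the threshold of 4 CPUs and the marker
name are illustrative assumptions.

    import multiprocessing

    import pytest

    # Illustrative sketch, not the final patch: gate the flaky shutdown test
    # on the number of available processors.  The threshold of 4 is an
    # assumption, not a measured value.
    flaky_on_small_machines = pytest.mark.skipif(
        multiprocessing.cpu_count() < 4,
        reason="test_do_not_block_event_loop_during_shutdown is flaky on "
        "machines with few CPUs",
    )

The marker would then be stacked on top of the existing @gen_cluster
decorator of test_do_not_block_event_loop_during_shutdown.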

Ideally, this should also be forwarded upstream, but debian/patches contains
some changes to timeout values that might affect the outcome of this test,
so we should make sure the failure is not of our own making before forwarding
the issue.
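
One way to check that: the test's own comment says its 20-second Event.wait()
must stay below both the pytest timeout and the default value of the timeout
parameter of Worker.close. A rough sanity check along those lines is sketched
below; the 300-second figure is the --timeout value passed by debian/rules,
and if the patched source names the parameter differently the snippet would
need adjusting.

    import inspect

    from distributed import Worker

    # Sketch of the timing invariant the failing test relies on.  20 s is the
    # block_handler.wait(20) inside the test; 300 s is the pytest --timeout
    # used by debian/rules.
    block_wait = 20
    pytest_timeout = 300
    close_timeout = inspect.signature(Worker.close).parameters["timeout"].default

    print("Worker.close default timeout:", close_timeout)
    assert block_wait < pytest_timeout, "pytest timeout patched below 20 s"
    assert block_wait < close_timeout, "Worker.close timeout patched below 20 s"

If either assertion fails on the patched tree but passes on pristine upstream
sources, the failure is likely ours.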

Thanks.
