Control: tags -1 + moreinfo

Hi Hans-Christoph,
On Sun, Mar 02, 2025 at 07:12:54PM +0100, Hans-Christoph Steiner wrote:
> Package: src:linux
> Version: 6.12.9-1~bpo12+1
> Severity: important
>
> Dear Maintainer,
>
> * What led up to the situation?
>
> My laptop was suspended, I opened it up, browsed the web a bit, then started
> to play a video on invidious. The computer totally froze, requiring a hard
> restart (10 second press on the power button).
>
> * What outcome did you expect instead?
>
> I expect Linux to never freeze the whole computer. That has been true for
> me before.
>
> -- Package-specific info:
> ** Version:
> Linux version 6.12.9+bpo-amd64 (debian-ker...@lists.debian.org) (x86_64-linux-gnu-gcc-12 (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.12.9-1~bpo12+1 (2025-01-19)
>
> ** Command line:
> BOOT_IMAGE=/vmlinuz-6.12.9+bpo-amd64 root=/dev/mapper/monolith--vg-root ro quiet
>
> ** Not tainted
>
> Crash:
>
> 2025-03-02T18:41:46.857786+01:00 monolith kernel: [207477.911548] INFO: task kworker/2:3:78475 blocked for more than 604 seconds.
> 2025-03-02T18:41:46.857805+01:00 monolith kernel: [207477.911568] Not tainted 6.12.9+bpo-amd64 #1 Debian 6.12.9-1~bpo12+1
> 2025-03-02T18:41:46.857807+01:00 monolith kernel: [207477.911574] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> 2025-03-02T18:41:46.857808+01:00 monolith kernel: [207477.911578] task:kworker/2:3 state:D stack:0 pid:78475 tgid:78475 ppid:2 flags:0x00004000
> 2025-03-02T18:41:46.857809+01:00 monolith kernel: [207477.911591] Workqueue: events_long ucsi_resume_work [typec_ucsi]
> 2025-03-02T18:41:46.857811+01:00 monolith kernel: [207477.911615] Call Trace:
> 2025-03-02T18:41:46.857812+01:00 monolith kernel: [207477.911619]  <TASK>
> 2025-03-02T18:41:46.857813+01:00 monolith kernel: [207477.911626]  __schedule+0x403/0xbf0
> 2025-03-02T18:41:46.857817+01:00 monolith kernel: [207477.911644]  schedule+0x27/0xf0
> 2025-03-02T18:41:46.857818+01:00 monolith kernel: [207477.911653]  schedule_preempt_disabled+0x15/0x30
> 2025-03-02T18:41:46.857819+01:00 monolith kernel: [207477.911662]  __mutex_lock.constprop.0+0x34c/0x6a0
> 2025-03-02T18:41:46.857820+01:00 monolith kernel: [207477.911670]  ucsi_send_command_common+0x73/0x2b0 [typec_ucsi]
> 2025-03-02T18:41:46.857822+01:00 monolith kernel: [207477.911682]  ucsi_resume_work+0x29/0xa0 [typec_ucsi]
> 2025-03-02T18:41:46.857823+01:00 monolith kernel: [207477.911693]  process_one_work+0x179/0x390
> 2025-03-02T18:41:46.857824+01:00 monolith kernel: [207477.911705]  worker_thread+0x251/0x360
> 2025-03-02T18:41:46.857826+01:00 monolith kernel: [207477.911714]  ? __pfx_worker_thread+0x10/0x10
> 2025-03-02T18:41:46.857846+01:00 monolith kernel: [207477.911722]  kthread+0xcf/0x100
> 2025-03-02T18:41:46.857848+01:00 monolith kernel: [207477.911728]  ? __pfx_kthread+0x10/0x10
> 2025-03-02T18:41:46.857849+01:00 monolith kernel: [207477.911733]  ret_from_fork+0x31/0x50
> 2025-03-02T18:41:46.857850+01:00 monolith kernel: [207477.911741]  ? __pfx_kthread+0x10/0x10
> 2025-03-02T18:41:46.857851+01:00 monolith kernel: [207477.911746]  ret_from_fork_asm+0x1a/0x30
> 2025-03-02T18:41:46.857852+01:00 monolith kernel: [207477.911756]  </TASK>
> 2025-03-02T18:42:56.921423+01:00 monolith rtkit-daemon[1453]: Supervising 0 threads of 0 processes of 0 users.
> 2025-03-02T18:42:56.921802+01:00 monolith rtkit-daemon[1453]: Supervising 0 threads of 0 processes of 0 users.
> 2025-03-02T18:43:13.513080+01:00 monolith whatsapp.desktop[81112]: [4:69:0302/184313.512258:ERROR:registration_request.cc(291)] Registration response error message: DEPRECATED_ENDPOINT
> 2025-03-02T18:43:47.689786+01:00 monolith kernel: [207598.742914] INFO: task kworker/2:3:78475 blocked for more than 724 seconds.
> 2025-03-02T18:43:47.689815+01:00 monolith kernel: [207598.742942] Not tainted 6.12.9+bpo-amd64 #1 Debian 6.12.9-1~bpo12+1
> 2025-03-02T18:43:47.689818+01:00 monolith kernel: [207598.742952] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> 2025-03-02T18:43:47.689819+01:00 monolith kernel: [207598.742957] task:kworker/2:3 state:D stack:0 pid:78475 tgid:78475 ppid:2 flags:0x00004000
> 2025-03-02T18:43:47.689821+01:00 monolith kernel: [207598.742977] Workqueue: events_long ucsi_resume_work [typec_ucsi]
> 2025-03-02T18:43:47.689823+01:00 monolith kernel: [207598.743012] Call Trace:
> 2025-03-02T18:43:47.689825+01:00 monolith kernel: [207598.743017]  <TASK>
> 2025-03-02T18:43:47.689827+01:00 monolith kernel: [207598.743029]  __schedule+0x403/0xbf0
> 2025-03-02T18:43:47.689832+01:00 monolith kernel: [207598.743056]  schedule+0x27/0xf0
> 2025-03-02T18:43:47.689834+01:00 monolith kernel: [207598.743069]  schedule_preempt_disabled+0x15/0x30
> 2025-03-02T18:43:47.689835+01:00 monolith kernel: [207598.743082]  __mutex_lock.constprop.0+0x34c/0x6a0
> 2025-03-02T18:43:47.689837+01:00 monolith kernel: [207598.743095]  ucsi_send_command_common+0x73/0x2b0 [typec_ucsi]
> 2025-03-02T18:43:47.689838+01:00 monolith kernel: [207598.743113]  ucsi_resume_work+0x29/0xa0 [typec_ucsi]
> 2025-03-02T18:43:47.689840+01:00 monolith kernel: [207598.743130]  process_one_work+0x179/0x390
> 2025-03-02T18:43:47.689842+01:00 monolith kernel: [207598.743148]  worker_thread+0x251/0x360
> 2025-03-02T18:43:47.689844+01:00 monolith kernel: [207598.743160]  ? __pfx_worker_thread+0x10/0x10
> 2025-03-02T18:43:47.689846+01:00 monolith kernel: [207598.743172]  kthread+0xcf/0x100
> 2025-03-02T18:43:47.689848+01:00 monolith kernel: [207598.743181]  ? __pfx_kthread+0x10/0x10
> 2025-03-02T18:43:47.689850+01:00 monolith kernel: [207598.743188]  ret_from_fork+0x31/0x50
> 2025-03-02T18:43:47.689877+01:00 monolith kernel: [207598.743199]  ? __pfx_kthread+0x10/0x10
> 2025-03-02T18:43:47.689879+01:00 monolith kernel: [207598.743204]  ret_from_fork_asm+0x1a/0x30
> 2025-03-02T18:43:47.689881+01:00 monolith kernel: [207598.743218]  </TASK>

The 6.12.9-based kernel is already superseded, so I would like to ask if
you can first try to reproduce the issue with a newer kernel from the
6.12.y series. Ideally you could test 6.12.17 from unstable. Are you able
to trigger the problem with that version as well, so that we can report
it to upstream?
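In case it is useful for testing: one way to try 6.12.17 without
upgrading the whole system is to install only the kernel image from
unstable. A rough sketch, assuming you temporarily add an unstable entry
to your APT sources (remove or disable it again after installing):

  # temporary entry, e.g. in /etc/apt/sources.list.d/unstable.list:
  #   deb http://deb.debian.org/debian unstable main
  apt update
  apt install -t unstable linux-image-amd64

Then reboot into the new kernel and try to reproduce the suspend/resume
freeze.

Regards,
Salvatore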