Dear Maintainer,

we're experiencing very similar behaviour on our end, and cups-filters > 1.13
does not appear to fix the issue for us. For testing purposes we upgraded the
cups-filters packages on our Stretch server to their Buster counterparts.

We're currently running:
cups                            2.2.1-8+deb9u1
cups-bsd                        2.2.1-8+deb9u1
cups-client                     2.2.1-8+deb9u1
cups-common                     2.2.1-8+deb9u1
cups-core-drivers               2.2.1-8+deb9u1
cups-daemon                     2.2.1-8+deb9u1
cups-filters                    1.20.3-1+b1
cups-filters-core-drivers       1.20.3-1+b1
cups-ipp-utils                  2.2.1-8+deb9u1
cups-pdf                        2.6.1-22
cups-ppdc                       2.2.1-8+deb9u1
cups-server-common              2.2.1-8+deb9u1
cupsddk                         1.5.3-5+deb7u6

The cupsd process remains at 100% CPU usage.

strace of the cupsd process (the following output scrolls past rapidly):
poll([{fd=19, events=POLLIN}], 1, 0)    = 1 ([{fd=19, revents=POLLIN}])
recv(21, "\247", 1, MSG_PEEK)           = 1
poll([{fd=21, events=POLLIN}], 1, 0)    = 1 ([{fd=21, revents=POLLIN}])
epoll_wait(5, [{EPOLLIN, {u32=1468342600, u64=83072721224}}, {EPOLLIN, {u32=1468196936, u64=91662510152}}], 4096, 1000) = 2
recv(19, "\303", 1, MSG_PEEK)           = 1
poll([{fd=19, events=POLLIN}], 1, 0)    = 1 ([{fd=19, revents=POLLIN}])
recv(21, "\247", 1, MSG_PEEK)           = 1
poll([{fd=21, events=POLLIN}], 1, 0)    = 1 ([{fd=21, revents=POLLIN}])
epoll_wait(5, [{EPOLLIN, {u32=1468342600, u64=83072721224}}, {EPOLLIN, {u32=1468196936, u64=91662510152}}], 4096, 1000) = 2
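
The syscall pattern above can be reproduced outside of cupsd with the short
standalone C program below (a minimal sketch of the suspected mechanism, not
actual cupsd code): a client sends one byte and closes its end, leaving the
server socket in CLOSE_WAIT; as long as the server only peeks at the pending
byte with MSG_PEEK and neither reads nor closes the descriptor, poll()
reports POLLIN immediately on every iteration and the loop spins at 100% CPU.

/*
 * Minimal sketch (not cupsd code) of the suspected busy-loop mechanism:
 * a TCP peer sends one byte and closes, leaving the server socket in
 * CLOSE_WAIT.  The "server" loop only peeks at the pending byte and never
 * reads or closes the descriptor, so poll() keeps reporting POLLIN at once.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Listener on 127.0.0.1, ephemeral port. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 1);

    socklen_t len = sizeof addr;
    getsockname(lfd, (struct sockaddr *)&addr, &len);

    /* "Client": connect, send one byte, close -> server side goes CLOSE_WAIT. */
    int cfd = socket(AF_INET, SOCK_STREAM, 0);
    connect(cfd, (struct sockaddr *)&addr, sizeof addr);
    send(cfd, "\247", 1, 0);
    close(cfd);

    int sfd = accept(lfd, NULL, NULL);

    /* "Server" loop: peek but never consume, never close. */
    for (unsigned long spins = 0;; spins++) {
        char c;
        struct pollfd pfd = { .fd = sfd, .events = POLLIN };

        recv(sfd, &c, 1, MSG_PEEK);   /* always returns 1, byte is never consumed */
        poll(&pfd, 1, 0);             /* always reports POLLIN immediately        */

        if (spins % 1000000 == 0)
            printf("still spinning, %lu iterations\n", spins);
    }
}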

The file descriptors correspond to the following sockets:
cupsd   32123  root  19u  IPv4  33857634  0t0  TCP iserv.dev2.iserv.eu:ipp->Dogmeat.dev2.iserv.eu:43696 (CLOSE_WAIT)
cupsd   32123  root  21u  IPv4  33865820  0t0  TCP iserv.dev2.iserv.eu:ipp->Dogmeat.dev2.iserv.eu:43706 (CLOSE_WAIT)

tshark doesn't show any network traffic on these sockets, which is expected
since both are in CLOSE_WAIT (the clients have already closed their end),
yet cupsd keeps polling them.
