On Thu, 19 Aug 2021 22:54:41 +0800, Shengjing Zhu wrote:

> It works for me. Here are my docker.service file and cgroup mount
> info. Could you compare the output?

> ====>docker.service<====

My ExecStart= line includes --storage-driver=overlay, which is needed to avoid
filesystem failures in my containers.  (I remember using overlay2 in the past,
but after running into a bug, I reverted to the older overlay driver.)
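
For context, the line is shaped roughly like this (the dockerd-rootless.sh
path is illustrative, and any other flags are elided):

  ExecStart=/usr/bin/dockerd-rootless.sh --storage-driver=overlay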

My PATH= line is the only other difference. Using yours instead does not fix the
error. (My PATH matches the output of `systemd-path search-binaries-default`.)
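
In unit-file terms, that means an Environment= line along these lines (the
exact value is whatever systemd-path prints on the host; this one is the
usual Debian default):

  Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin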

Other than that, our docker.service files are the same.


> ====>cgroup<====

Our `mount|grep cgroup` output matches exactly.
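That is, both hosts appear to be on the unified v2 hierarchy, a single line
roughly like this (mount options abbreviated from memory):

  cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)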


> ====>systemctl<====

The top sections of our systemctl status output differ only slightly:
- Your rootlesskit and /proc/self/exe command lines look truncated.
- My dockerd command line includes --storage-driver=overlay, of course.
- CPU time, pids, and paths are different, of course.


The log messages, on the other hand, are quite different.
Starting docker without my workaround yields these messages:

level=warning msg="Unable to find cpu controller"
level=warning msg="Unable to find io controller"
level=warning msg="Unable to find cpuset controller"
level=info msg="Loading containers: start."
level=warning msg="Running modprobe bridge br_netfilter failed with message: 
modprobe: ERROR: could not insert 'br_netfilter': Operation not 
permitted\ninsmod /lib/modules/5.10.0-8-arm64/kernel/net/bridge/br_netfilter.ko 
\n, error: exit status 1"
level=info msg="Default bridge (docker0) is assigned with an IP address 
172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
level=info msg="Loading containers: done."
level=info msg="Docker daemon" commit=363e9a8 graphdriver(s)=overlay 
version=20.10.5+dfsg1
level=info msg="Daemon has completed initialization"
level=info msg="API listen on /run/user/[UID]/docker.sock"

Attempting to run a container without my workaround yields these messages:

level=info msg="starting signal loop" namespace=moby 
path=/run/.ro138154868/user/[UID]/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/[CONTAINERID]
 pid=[PID]
level=info msg="shim disconnected" id=[CONTAINERID]
level=error msg="stream copy error: reading from a closed fifo"
level=error msg="stream copy error: reading from a closed fifo"
level=error msg="[CONTAINERID] cleanup: failed to delete container from 
containerd: no such container"
level=error msg="Handler for POST /v1.41/containers/[CONTAINERID]/start 
returned error: OCI runtime create failed: container_linux.go:367: starting 
container process caused: process_linux.go:340: applying cgroup configuration 
for process caused: read unix @->/run/systemd/private: read: connection reset 
by peer: unknown"


> ====>docker info<====

My `docker info` output has several differences from yours:

Context:    default
Images: 9
Storage Driver: overlay
 Backing Filesystem: extfs
 Supports d_type: true
Kernel Version: 5.10.0-8-arm64
Architecture: aarch64

Mine also includes some host details at the end (which I assume you
deliberately skipped) and these warnings:

WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: Support for cgroup v2 is experimental
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
WARNING: the overlay storage-driver is deprecated, and will be removed in a 
future release.


> I think the difference may be the arch, as I'm testing it on amd64.
> Not sure if it's an arm64 specific kernel issue.

Maybe; I don't know much about cgroups. Do they behave differently on
arm64 vs. amd64?

I notice that you're running a kernel package one version behind mine.
