Thank you for the suggestions.
I have found the problem with GlusterFS and Linstor+DRBD. The MTU on the servers was set to 9000, while the switches did not have jumbo frames
enabled. I changed the MTU on the servers to the standard 1500 and GlusterFS now works pretty well. I suppose Ceph would have had the same problem,
and DRBD was facing the problem because of the MTU as well.
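For reference, a quick way to confirm this kind of MTU mismatch is to ping with fragmentation disabled; the peer hostname and interface name below are placeholders:

```shell
# 8972 = 9000 byte MTU - 20 (IP header) - 8 (ICMP header).
# If this fails while the standard size works, something on the
# path (e.g. a switch) is dropping jumbo frames.
ping -c 3 -M do -s 8972 node2
ping -c 3 -M do -s 1472 node2

# Fall back to the standard MTU on the server side:
ip link set dev eth0 mtu 1500
```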
DRBD is the fastest of all.
Linstor on top of DRBD is a good option with extensive command-line tooling. Linstor knows how to use ZFS and LVM to create volumes, and it gives a
good status overview. But Linstor leans toward the commercial side and the free version is not up to date. Also, the Linstor plugin for Docker
segfaults, leaving the Docker environment unusable. To mount Linstor+DRBD volumes on several servers at once, one should also use a cluster file
system such as GFS2 or OCFS2.
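For completeness, the LVM-backed Linstor setup I tried looked roughly like this (node names, IP addresses, and the volume group name are placeholders, and the exact syntax may differ between Linstor versions):

```shell
# Register the nodes with the Linstor controller.
linstor node create node1 192.168.0.1
linstor node create node2 192.168.0.2
linstor node create node3 192.168.0.3

# Back a storage pool with an existing LVM volume group "vg_data".
linstor storage-pool create lvm node1 pool_data vg_data
linstor storage-pool create lvm node2 pool_data vg_data
linstor storage-pool create lvm node3 pool_data vg_data

# Define a 10 GiB resource replicated across all three nodes.
linstor resource-definition create shared_vol
linstor volume-definition create shared_vol 10G
linstor resource create shared_vol --auto-place 3 --storage-pool pool_data
```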
GlusterFS is very simple to set up. It is a filesystem itself, layered on top of volumes. However, its status output is not very intuitive and takes
some time to figure out; once you do, it is simple. GlusterFS does not know about the underlying hardware: you give it a directory (which may be on a
separate volume, i.e. ZFS, LVM, or a plain disk) formatted with the desired filesystem (zfs, ext3, ext4, gfs2, etc.). GlusterFS has a plugin for
Docker which lets one create volumes like any other in Docker.
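As an illustration, a minimal replica-3 GlusterFS volume looks roughly like this (hostnames and brick paths are placeholders):

```shell
# Join the peers into one trusted pool (run on node1).
gluster peer probe node2
gluster peer probe node3

# Create and start a volume mirrored on all three nodes; each brick
# is just a directory on a dedicated mount.
gluster volume create docker_vol replica 3 \
    node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
gluster volume start docker_vol

# The status output that takes some getting used to:
gluster volume status docker_vol
gluster volume heal docker_vol info

# Mount on any node (fstab equivalent:
# node1:/docker_vol /mnt/docker_vol glusterfs defaults,_netdev 0 0)
mount -t glusterfs node1:/docker_vol /mnt/docker_vol
```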
As for Ceph, it is way too complicated (it makes sense when using more than 5 storage servers). Ceph has a good web interface for administration.
Ceph cannot easily be used on top of ZFS or LVM; it prefers raw devices. I would like to move to Ceph, but only with a good, consistent step-by-step
manual that results in a working setup.
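For what it's worth, my understanding from the documentation is that the cephadm route would look roughly like this; I have not verified it end to end, and the hostnames and IPs are placeholders:

```shell
# Bootstrap the first monitor/manager node.
cephadm bootstrap --mon-ip 192.168.0.1

# Add the remaining hosts to the cluster.
ceph orch host add node2 192.168.0.2
ceph orch host add node3 192.168.0.3

# Let the orchestrator create OSDs on all unused raw devices.
ceph orch apply osd --all-available-devices
```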
On 29.04.2023 01:26, Linux-Fan wrote:
Mimiko writes:
Hello.
I want to use shared storage with Docker to store volumes so swarm
containers can use the volumes from different Docker nodes.
So far I have tried GlusterFS and Linstor+DRBD.
GlusterFS has a bug with slow replication and slow volume status presentation or timing out. Also, directory listing and data access are not
reliable, even though the nodes are connected to each other. So GlusterFS did not work for me.
With Linstor+DRBD and the Linstor plugin for Docker, the created volumes are
inconsistent on creation. I did not find a solution.
DRBD might still be a solid base to run upon. Consider using an alternative file system on top of it, e.g. GFS2 or OCFS2. The management tools to
assemble these kinds of cluster file systems can be found in the packages
gfs2-utils and ocfs2-utils respectively. Back when I tried it, I found OCFS2
slightly easier to set up.
I once tested a Kubernetes + OCFS2 setup (backed by a real iSCSI target rather
than DRBD though) and it worked OK.
I want to try Ceph.
What other storage solutions have you tried that worked? It should mirror data on more than 2 nodes and present itself either as a block device to
mount in the system, as a network file system mounted via fstab, or as a Docker volume. It should be HA and consistent with at least 3 nodes, should
recover automatically even if two of the three nodes restart, and should be tolerant of sporadic network outages.
I am not aware of any system that fulfils all of these requirements. Specifically, systems are typically either low-latency OR robust in the event of
“sporadic network shortage” issues.
GlusterFS would be preferable here if it were not for the timeout bug.
I have never used GlusterFS, but from what I have heard about it, it very much seems as if this is not really a “bug” but rather by design. I think
that the observed behaviour might also be similar if you were to use Ceph instead of GlusterFS, but if you find out that it works much better, I'd be
interested in learning about it :)
HTH and YMMV
Linux-Fan
öö