Hello, I am deploying 10+ bare-metal servers to serve as hosts for containers managed through LXD. As the number of containers grows, communication between containers running on different hosts becomes difficult to manage and needs to be streamlined.
The goal is to set up a 192.168.0.0/24 network over which containers can communicate regardless of their host. The solutions I looked at [1] [2] [3] recommend using OVS and/or GRE on the hosts, together with the bridge.driver: openvswitch configuration for LXD.

Note: the bare-metal servers are hosted on different physical networks, so the use of multicast was ruled out.

An illustration of the target architecture is similar to the image visible at https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=6aJRw15HSf&pg=PA197#v=onepage&q&f=false (note that this extract is from a book about LXC, not LXD).

The point that is not clear to me is:

- whether each container needs as many veth interfaces as there are bare-metal hosts, in which case [de]commissioning a bare-metal server would require configuration updates on all existing containers (and would basically rule out this scenario),

- or whether it is possible to "hide" this mesh network at the host level and have a single veth inside each container to access the private network and communicate with all the other containers, regardless of their physical location and regardless of the number of physical peers.

Has anyone built such a setup? Does the OVS+GRE setup need to be built prior to "lxd init", or can LXD automate part of it? (See the sketch in the P.S. below for the kind of setup I have in mind.)

Online documentation is scarce on this topic, so any help would be appreciated.

Regards,
Amaury

[1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
[2] https://stackoverflow.com/questions/39094971/want-to-use-the-vlan-feature-of-openvswitch-with-lxd-lxc
[3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/
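P.S. To make the question more concrete, here is the kind of host-side setup I am imagining, pieced together from [1] and [2]. This is only a minimal sketch, not a tested configuration: the 10.0.0.x tunnel endpoint addresses, the gre-hostX port names and the container name c1 are placeholders I made up, and I am assuming LXD 2.3+ for the "lxc network" commands.

    # On host A (tunnel endpoint 10.0.0.1); run the equivalent on every
    # host, with one GRE port per remote peer (full mesh).

    # Let LXD create and manage the OVS bridge.
    lxc network create lxdbr0 bridge.driver=openvswitch \
        ipv4.address=192.168.0.1/24 ipv4.nat=false ipv6.address=none
    # (Presumably only one host should hand out DHCP on 192.168.0.0/24;
    # the others would set ipv4.address=none -- another assumption.)

    # Stitch the per-host bridges together with GRE, outside of LXD.
    ovs-vsctl add-port lxdbr0 gre-hostB -- set interface gre-hostB \
        type=gre options:remote_ip=10.0.0.2
    ovs-vsctl add-port lxdbr0 gre-hostC -- set interface gre-hostC \
        type=gre options:remote_ip=10.0.0.3

    # Enable STP so broadcasts do not loop around the full mesh.
    ovs-vsctl set bridge lxdbr0 stp_enable=true

    # Each container then gets a single veth attached to the bridge.
    lxc network attach lxdbr0 c1 eth0

If this is roughly right, my question boils down to whether the ovs-vsctl part has to be done by hand on every host, or whether LXD can drive some of it.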
