Hi Lennart,
I'll do some testing today with "BindCarrier=" and check whether it covers all
use cases.
Regarding the "networkctl" update to show UFD groups in a user-friendly
fashion, what do you think of the following?
Let's take a simple example.
If I have a configuration file like below:
# cat sw0p10.network
[Match]
Name=sw0p10
[Network]
BindCarrier=sw0p49 sw0p51 sw0p5 sw0p7
In the background, networkd would create and monitor a UFD group with an ID,
let's say 1.
Then networkctl would give following output to the user:
# networkctl ufd
● UFD Group: 1
#
and
# networkctl ufd 1
● UFD Group: 1
State: configured
Uplinks:
→ 51: sw0p49
→ 53: sw0p51
→ 7: sw0p5
→ 9: sw0p7
Downlinks:
→ 12: sw0p10
#
Of course "networkd ufd -a" would also work.
How does it sound ?
Best Regards,
Alin
-----Original Message-----
From: Lennart Poettering [mailto:[email protected]]
Sent: Wednesday, January 28, 2015 6:59 PM
To: Rauta, Alin
Cc: Andrei Borzenkov; Tom Gundersen; Kinsella, Ray; systemd Mailing List
Subject: Re: [systemd-devel] [PATCH] Added UFD (Uplink failure detection)
support to networkd
On Wed, 28.01.15 17:18, Rauta, Alin ([email protected]) wrote:
> Hi Lennart, Tom,
>
> We should also be able to add virtual devices to UFD groups, like
> Andrei mentioned in his email. In this case, do you think
> "BindCarrier=" and "Tag=" in .network files would still work ?
Again, my latest proposal does away with the "Tag=" concept entirely.
I am not sure what a "virtual device" is supposed to be. If it has a Linux
network interface, then it has a name, and all I am saying is that a simple
concept like BindCarrier= taking a list of globs of interface names should
cover what you need.
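For illustration, a downlink's .network file using such globs might look
something like this (the interface names here are made up):

[Match]
Name=dl0
[Network]
BindCarrier=sw0p4* sw0p5?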
> If we think about LAG (link aggregation), and if I am right, it's
> mapped to the kernel as a virtual device and contains multiple links.
> This way, it makes sense to have groups of links as netdevs. The only
> difference in the case of UFD is that it is not mapped to the kernel,
> but mapped inside networkd.
In networkd, there are:
1) network interfaces created automatically by some kernel driver,
because the hardware was discovered. To these we apply one .link
file via udev, plus maybe a .network file, when we actually use it
to connect to a network.
2) network interfaces that have to be created explicitly, via some
kernel API. These are configured via .netdev files. From the point
at which networkd creates them, they are like any other network
interface, i.e. exactly like the ones described in #1: on top
of the .netdev file a .link file is then applied, and finally maybe
a .network file.
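For example, a minimal .netdev file of the second kind, creating a bridge
(purely as an illustration), would be:

[NetDev]
Name=br0
Kind=bridge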
Now, all I am saying is that I think it would suffice if the .network files for
the downlinks contain BindCarrier= globs referring to their respective
uplinks. To make this work nicely, all that is necessary then is that the
network interfaces get pretty names, either right from the .netdev files, or
from the .link files.
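A .link file that assigns such a pretty name could look like this (the MAC
address here is made up):

[Match]
MACAddress=00:16:3e:12:34:56
[Link]
Name=sw0p49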
> Another thing is that maybe later on we want to provide some
> properties for a UFD group, maybe to change the way we consider an
> uplink as failing. This would be easy if we have a netdev for the UFD
> group. Also, by defining a netdev, we don't lose the identity of the
> feature, nor do we mask it.
But this could also be another setting in the .network file that is applied
to the downlink. For example, in the .network file of the downlink we could have:
BindCarrier=foo[1-7]
BindCarrierMode=need-all
Or so, which could mean: bring the downlink up only if there's a carrier on all
network interfaces that match the glob "foo[1-7]". The default for
BindCarrierMode= would be "need-any" or so, which would mean that the carrier
is propagated when at least one of the matching network interfaces has a
carrier.
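Put together, the downlink's .network file might then look like this
(BindCarrierMode= being the hypothetical new setting from above):

[Match]
Name=dl0
[Network]
BindCarrier=foo[1-7]
BindCarrierMode=need-all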
Wouldn't that cover your use case?
Lennart
--
Lennart Poettering, Red Hat
_______________________________________________
systemd-devel mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/systemd-devel