On 2019-04-04 09:52, Dan Williams wrote:
On Thu, 2019-04-04 at 11:00 +0200, Johannes Berg wrote:
Hi,
> Thanks a lot for doing this! Being responsible for most of the issues
> you point out, I can only say that you have my full support if you
> want to change any of it.
:-)
> My pathetic excuses below are just meant to clarify why things are
> the way they are. They are not a defense for status quo ;-)
Thanks!
> > Here's the current things we seem to be doing:
> >
> > (1) Channels are created/encoded as VLANs (cdc_mbim)
> >
> >     This is ... strange at best, it requires creating fake ethernet
> >     headers on the frames, just to be able to have a VLAN tag. If
> >     you could rely on VLAN acceleration it wouldn't be _so_ bad,
> >     but of course you can't, so you have to detect an in-band VLAN
> >     tag and decode/remove it, before taking the VLAN ID into the
> >     virtual channel number.
>
> No, the driver sets NETIF_F_HW_VLAN_CTAG_TX. There is no in-band VLAN
> tag for any normal use. The tag is taken directly from skb metadata
> and mapped to the appropriate MBIM session ID.
Right, I saw this.
> But this failed when cooking raw frames with an in-line tag using
> packet sockets, so I added a fallback to in-line tags for that use
> case.
But this still means that the in-line fallback has to be supported, so
you can't really fully rely on VLAN acceleration. Maybe my wording
here was incomplete, but I was aware of this.
Nevertheless, it means that to replicate this in another driver you
don't just need the VLAN acceleration handling, but also the fallback,
so it's a bunch of extra code.
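For illustration, here's a rough userspace sketch of that "extra code"
on the TX path: take the tag from out-of-band metadata when present
(the accelerated case), otherwise look for and strip an in-line 802.1Q
tag. The `fake_skb` struct and `frame_session_id()` are made-up names
standing in for the real skb handling, not cdc_mbim's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETH_P_8021Q 0x8100

/* Hypothetical stand-in for an skb: a frame buffer plus the tag
 * carried out-of-band, as with NETIF_F_HW_VLAN_CTAG_TX. */
struct fake_skb {
	uint8_t data[64];
	size_t len;
	int vlan_present;	/* tag in metadata (accelerated path) */
	uint16_t vlan_tci;
};

/*
 * Return the session ID (== VLAN ID) for a frame, stripping an
 * in-line 802.1Q tag if one is found.  Models the two paths above:
 * the normal accelerated path and the packet-socket fallback.
 */
static int frame_session_id(struct fake_skb *skb)
{
	if (skb->vlan_present)	/* accelerated: tag is out-of-band */
		return skb->vlan_tci & 0x0fff;

	/* fallback: look for an in-line tag after the MAC addresses */
	if (skb->len >= 18 &&
	    ((skb->data[12] << 8) | skb->data[13]) == ETH_P_8021Q) {
		int vid = ((skb->data[14] << 8) | skb->data[15]) & 0x0fff;

		/* strip the 4-byte tag so the payload is untagged */
		memmove(&skb->data[12], &skb->data[16], skb->len - 16);
		skb->len -= 4;
		return vid;
	}

	return 0;	/* untagged: default session */
}
```

Even in this toy form the fallback branch dominates the function, which
is the duplication concern: every driver copying the scheme needs both
paths.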
> > Creating channels is hooked on VLAN operations, which is about the
> > only thing that makes sense here?
>
> Well, that was why I did this, to avoid requiring some new set of
> userspace tools to manage these links. I looked for some existing
> tools for adding virtual netdevs, and I thought I could make VLANs
> fit the scheme.
Right.
> In hindsight, I should have created a new netlink based API for
> cellular modem virtual links instead. But I don't think it ever
> struck me as a choice I had at the time. I just wasn't experienced
> enough to realize how the Linux kernel APIs are developed ;-)
:-)
And likely it really wasn't all as fleshed out then as it is today,
with the plethora of virtual links supported. This seems fairly old.
> > (2) Channels are created using sysfs (qmi_wwan)
> >
> >     This feels almost worse - channels are created using sysfs and
> >     just *bam* new netdev shows up, no networking APIs are used to
> >     create them at all, and I suppose you can't even query the
> >     channel ID for each netdev if you rename them or so. Actually,
> >     maybe you can in sysfs, not sure I understand the code fully.
>
> This time I was, and I tried to learn from the MBIM mistake. So I
> asked the users (ModemManager developers++), proposing a netlink API
> as a possible solution:
>
> https://lists.freedesktop.org/archives/libqmi-devel/2017-January/001900.html
>
> The options I presented were those I saw at the time: VLANs like
> cdc_mbim, a new netlink API, or sysfs. There wasn't much feedback,
> but sysfs "won". So this was a decision made by the users of the API,
> FWIW.
Fair point. Dan pointed out that no (default) userspace actually
exists to do this though, and users kind of have to do it manually -
he says ModemManager and libmbim all just use the default channel
today. So not sure they really went on to become users of this ;-)
To be clear, ModemManager doesn't (yet) make use of multiple IP
channels. But libmbim supports it with
'mbimcli --connect="session-id=4,apn=XXXXX"' and then you'd add VLAN 4
onto the mbim netdev and theoretically things would work :) Bjorn
would have the details though.
libmbim really doesn't care about the extra netdevs or channels itself
since it doesn't care about the data plane (nor does it need to at
this time).
Dan
> > (3) Channels are created using a new link type (rmnet)
> >
> >     To me this sort of feels the most natural, but this particular
> >     implementation has at least two issues:
> >
> >     (a) The implementation is basically driver-specific now, the
> >         link type is called 'rmnet' etc.
> >     (b) The bridge enslave thing there is awful.
>
Hi
The normal mode of operation of rmnet is using the rmnet netdevices in
an embedded device.
The bridge mode is used only for testing, by sending frames without
de-muxing to some other driver, such as a USB netdev, so packets can
be parsed on a tethered PC.
> This driver showed up right after the sysfs based implementation in
> qmi_wwan. Too bad we didn't know about this work then. I don't think
> anyone would have been interested in the qmi_wwan sysfs thing if we
> had known about the plans for this driver. But what's done is done.
Sure.
I was planning to refactor qmi_wwan to reuse rmnet as much as possible.
Unfortunately, I wasn't able to get qmi_wwan / modem manager
configured and never got to this.
> > It seems to me that there really is space here for some common
> > framework, probably modelled on rmnet - that seems the most
> > reasonable approach of all three.
> >
> > The only question I have there is whether the 'netdev model' they
> > all have actually makes sense. What I mean by that is that they all
> > assume they have a default channel (using untagged frames, initial
> > netdev, initial netdev respectively for (1) - (3)).
>
> Good question. I guess the main argument for the 'netdev model' is
> that it makes the device directly usable with no extra setup or
> tools. Most users won't ever want or need more than one channel
> anyway. They use the modem for a single IP session.
You can do that with both models, really. I mean, with wifi we just
create a single virtual interface by default and you can then go and
use it. But you can also *delete* it later because the underlying
abstraction ("wiphy") doesn't disappear.
This can't be done if you handle the new channel netdevs on top of the
default channel netdev.
> There is also an advantage for QMI/RMNET where you can drop the
> muxing header when using the default channel only.
That's a pretty internal driver thing though:

	if (tag == default) {
		/* send the frame down without a header */
		return;
	}

no?
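To make that special case concrete, here's a hedged userspace sketch
of the encapsulation side. The 4-byte header layout (pad/flags byte,
mux ID byte, big-endian payload length) is loosely modelled on rmnet's
MAP header; `mux_encap()` and the exact field layout are illustrative,
not any driver's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Copy a frame into 'out', prepending a 4-byte mux header unless the
 * frame belongs to the default channel, which goes out bare.
 * Returns the total output length.  'out' must have room for
 * plen + 4 bytes.
 */
static size_t mux_encap(uint8_t mux_id, uint8_t default_id,
			const uint8_t *payload, size_t plen,
			uint8_t *out)
{
	if (mux_id == default_id) {
		/* default channel: send the frame down without a header */
		memcpy(out, payload, plen);
		return plen;
	}

	out[0] = 0;			/* no padding, data frame */
	out[1] = mux_id;		/* channel/session selector */
	out[2] = (uint8_t)(plen >> 8);	/* payload length, big endian */
	out[3] = (uint8_t)(plen & 0xff);
	memcpy(out + 4, payload, plen);
	return plen + 4;
}
```

The default-channel branch really is just the early-return shown in
the pseudocode above, which is why it stays an internal driver detail
rather than something the channel API needs to expose.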
Having a single channel is not common, so rmnet doesn't support a
default channel mode. Most of the use cases we have require multiple
PDNs, so this translates to multiple rmnet devices.
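The multi-PDN direction can be sketched the same way: an aggregate
buffer carries a sequence of header-prefixed frames, and the mux ID in
each header selects which per-session netdev receives the payload.
Again an illustrative simulation with the same assumed 4-byte header
(flags, mux ID, big-endian length), not rmnet's actual parser:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Walk an aggregate of header-prefixed frames, tallying frames per
 * mux ID in 'counts' (where a real driver would hand each payload to
 * the netdev registered for that ID).  Returns the number of frames
 * parsed, or -1 on a truncated aggregate.
 */
static int mux_demux(const uint8_t *buf, size_t len, int counts[256])
{
	size_t off = 0;
	int nframes = 0;

	while (off + 4 <= len) {
		uint8_t mux_id = buf[off + 1];
		size_t plen = ((size_t)buf[off + 2] << 8) | buf[off + 3];

		if (off + 4 + plen > len)
			return -1;	/* truncated frame */

		counts[mux_id]++;	/* dispatch point per session */
		off += 4 + plen;
		nframes++;
	}
	return nframes;
}
```

One netdev per mux ID keeps the dispatch a plain table lookup, which
matches the "multiple PDNs, multiple rmnet devices" model described
above.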
> > In 802.11, we don't have such a default channel - you can
> > add/remove virtual netdevs on the fly. But if you want to do that,
> > then you can't use IFLA_LINK and the normal link type, which means
> > custom netlink and custom userspace etc. which, while we do it in
> > wifi, is bothersome.
>
> Yes, some of the feedback I've got from the embedded users is that
> they don't want any more custom userspace tools. But I'm sure you've
> heard that a few times too :-)
Not really, they have to run wpa_supplicant anyway and that handles it
all, but I hear you.
> > Here I guess the question would be whether it makes sense to even
> > remove the default channel, or retag it, or something like that.
> > If no, then to me it all makes sense to just model rmnet. And even
> > if it *is* something that could theoretically be done, it seems
> > well possible to me that the benefits (using rtnl_link_register()
> > etc.) outweigh the deficits of the approach.
> >
> > I'm tempted to take a stab at breaking out rmnet_link_ops from the
> > rmnet driver, somehow giving it an alias of 'wwan-channel' or
> > something like that, and putting it into some sort of small
> > infrastructure.
> >
> > Anyone else have any thoughts?
>
> I've added Aleksander (ModemManager) and Daniele (qmi_wwan muxing
> user and developer) to the CC list. They are the ones who would end
> up using a possible new API, so they should definitely be part of
> the discussion.
Agree, thanks!
johannes
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project