On Fri, 2019-04-12 at 16:27 +0200, Bjørn Mork wrote:
> Johannes Berg <johan...@sipsolutions.net> writes:
> > On Wed, 2019-04-10 at 21:54 -0600, Subash Abhinov Kasiviswanathan wrote:
> > 
> > > These packets will be processed as raw IP muxed frames on the PC as 
> > > well, not as ethernet though.
> > 
> > But in order to transport them, they're always encapsulated in ethernet?
> 
> No. There is no ethernet encapsulation of QMAP muxed frames over USB.
> 
> Same goes for MBIM, BTW. The cdc_mbim driver adds ethernet encapsulation
> on the host side, but that is just an unfortunate design decision based
> on the flawed assumption that ethernet interfaces are easier to relate
> to.

Yes yes, sorry - I snipped too much. We were talking here in the context
of capturing the QMAP-muxed frames remotely on another system, in
particular the strange bridge mode rmnet has.

And I said up the thread:

> Yeah, I get it, it's just done in a strange way. You'd think adding a
> tcpdump or some small application that just resends the packets directly
> received from the underlying "real_dev" using a ptype_all socket would
> be sufficient? Though perhaps not quite the same performance, but then
> you could easily not use an application but a dev_add_pack() thing? Or
> probably even tc's mirred?
> 
> And to extend that thought, tc's ife action would let you encapsulate
> the things you have in ethernet headers... I think.

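For illustration, the mirred variant suggested above might look roughly
like this (device names are placeholders, not real rmnet naming):

```shell
# Mirror every frame arriving on the underlying "real_dev" to a second
# netdev, where it can then be captured with tcpdump.  "real0" and
# "mon0" are hypothetical device names.
tc qdisc add dev real0 clsact
tc filter add dev real0 ingress matchall \
    action mirred egress mirror dev mon0
```

This is pure configuration of existing tc infrastructure - no new
kernel code needed for the mirroring itself.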
So basically yes, I know we (should) have only IP frames on the session
netdevs, and only QMAP-muxed frames on the underlying netdev (assuming
it exists), but I don't see the point in having kernel code to add
ethernet headers to send it over some other netdev - you can trivially
solve all of that in userspace or quite possibly even in the kernel with
existing (tc) infrastructure.

johannes
