On 10/10/2023 11:25, Luci Stanescu wrote:
Hi Simon,

On 10 Oct 2023, at 00:17, Simon Kelley <[email protected]> wrote:

I've implemented option 1 here and it's currently running as dogfood on my home network. There are no VRF interfaces there: this is a test mainly to check that nothing breaks. So far, so good.

The patch I used is attached. It would be interesting to see if it solves the problem for you.

Many thanks for this! I can confirm that it works as expected with VRF-enslaved interfaces now.

Excellent. I've extended the patch slightly so that it logs when doing the fixup. If it turns out that there are cases where it's doing that inappropriately, the log will make it easier to see what's going on.

The patch is in the git public repo master branch now, so if anyone on the lists starts seeing "Working around kernel bug....." messages, please reply here ASAP.
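For reference, here's a minimal sketch of the mechanism under discussion (this is not dnsmasq's actual code, and recv_with_ifindex is a made-up helper): enable IPV6_RECVPKTINFO on the socket, then read ipi6_ifindex from the ancillary data of each received datagram. That index is the value a VRF-aware fixup would inspect.

    #define _GNU_SOURCE
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    /* Delivery of IPV6_PKTINFO must be enabled first:
     *   int on = 1;
     *   setsockopt(fd, IPPROTO_IPV6, IPV6_RECVPKTINFO, &on, sizeof(on));
     */

    /* Receive one datagram and return the interface index it arrived
     * on, or -1 on error. */
    static int recv_with_ifindex(int fd, void *buf, size_t len)
    {
        struct sockaddr_in6 from;
        char cbuf[CMSG_SPACE(sizeof(struct in6_pktinfo))];
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr msg = {
            .msg_name = &from, .msg_namelen = sizeof(from),
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
        };
        struct cmsghdr *c;
        int ifindex = 0;

        if (recvmsg(fd, &msg, 0) < 0)
            return -1;

        for (c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
            if (c->cmsg_level == IPPROTO_IPV6 && c->cmsg_type == IPV6_PKTINFO) {
                struct in6_pktinfo pi;
                memcpy(&pi, CMSG_DATA(c), sizeof(pi));
                ifindex = pi.ipi6_ifindex; /* interface the datagram arrived on */
            }
        /* A workaround like the one described above would check ifindex
         * here and, where needed, remap it (logging as it does so). */
        return ifindex;
    }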


2. Finding authoritative information that the interface index from IPV6_PKTINFO is always set to the L3 interface on which a datagram was received. The kernel mailing list might be the place to start? I'd certainly be happy to help think about and test various scenarios.

Please enquire about 2.

I've tested chains of bond- and bridge-enslaved interfaces (e.g. veth in bond in bridge in bond) and ipi6_ifindex seems to be set to the highest-up master, excluding VRF devices, so that seems promising and should cover the empirical bit. Joining a multicast group on an enslaved interface (if the master isn't a VRF) doesn't seem to work anyway.
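In case anyone wants to reproduce the multicast observation, this is the kind of join being tested (a plain RFC 3493 group join on a specific interface; the group address and interface name passed in are placeholders, not anything dnsmasq-specific):

    #include <net/if.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Join an IPv6 multicast group on a specific interface, by index.
     * Returns 0 on success, -1 on error. */
    static int join_group(int fd, const char *group, const char *ifname)
    {
        struct ipv6_mreq mreq;

        if (inet_pton(AF_INET6, group, &mreq.ipv6mr_multiaddr) != 1)
            return -1;
        mreq.ipv6mr_interface = if_nametoindex(ifname); /* 0 if unknown */
        return setsockopt(fd, IPPROTO_IPV6, IPV6_JOIN_GROUP,
                          &mreq, sizeof(mreq));
    }

For example, join_group(fd, "ff02::1:2", "eth0") joins the All-DHCP-Relay-Agents-and-Servers group; per the observation above, the call is expected to fail when eth0 is enslaved to a non-VRF master.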

I'll ask on the netdev kernel mailing list and see if I can get any assurances, but I'll have to wait for my DMARC record to expire first.


Thanks for that.


Simon.

Cheers,
Luci

--
Luci Stanescu
Information Security Consultant


