Hi Ben,
nice to meet you and welcome to the case. Everything I said is true, and the 
VETH case was already covered by your predecessor Daniel back in 2023 (the 
problem vanished on my side in 2024 for unknown reasons).

As the MACVLAN case seems to be missing from the sample provided earlier, I 
supply it now:

peterg@debian:~/Labs$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:01 brd ff:ff:ff:ff:ff:ff
4: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:02 brd ff:ff:ff:ff:ff:ff
5: ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode 
DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:03 brd ff:ff:ff:ff:ff:ff
6: ixia@ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:02 brd ff:ff:ff:ff:ff:ff
7: dmz1@ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:02 brd ff:ff:ff:ff:ff:ff
26: inet@ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:02 brd ff:ff:ff:ff:ff:ff
peterg@debian:~/Labs$ ip net
proxy (id: 11)
inet (id: 8)
sniffer (id: 10)
s7 (id: 6)
s6 (id: 5)
s5 (id: 4)
s4 (id: 3)
s3 (id: 2)
s2 (id: 1)
s1 (id: 0)
peterg@debian:~/Labs$ more inet_setup
#!/bin/bash

# 802.1Q VLAN 199 on top of ens224; the VLAN interface is named "inet"
ip link add link ens224 name inet type vlan id 199
ip link set dev inet up

# namespace (also named "inet") holding a private-mode macvlan on that VLAN
ip netns add inet
ip link add link inet name lab_inet type macvlan mode private
ip link set lab_inet netns inet
ip -n inet link set dev lo up
ip -n inet link set dev lab_inet up
ip -n inet link set address 00:50:56:01:01:21 dev lab_inet
ip -n inet addr add 70.0.0.254/24 dev lab_inet
ip -n inet route add default via 70.0.0.253 dev lab_inet
ip -n inet route add 172.17.0.0/24 via 70.0.0.1

peterg@debian:~/Labs$ ip -n inet link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode 
DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
27: lab_inet@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UP mode DEFAULT group default qlen 1000
    link/ether 00:50:56:01:01:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
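
The macvlan mode actually in effect can be double-checked with the detail 
flag (a quick sketch, assuming the setup above; the output should include 
"macvlan mode private"):

```shell
# -d prints driver-specific details for the device inside the "inet" netns
ip -d -n inet link show lab_inet
```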

Again, the port-VLAN-based config does not exhibit any inbound traffic 
reflection and stays perfectly silent on its unicast sessions.
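
For completeness, reflection on the parent interface could be confirmed with 
a capture (a sketch, assuming tcpdump is available; the MAC is the one 
assigned to lab_inet above):

```shell
# capture only frames transmitted out of ens224 (-Q out); seeing copies of
# frames addressed to lab_inet's MAC going back out would indicate that
# inbound traffic is being reflected on the same interface
tcpdump -Q out -e -n -i ens224 ether host 00:50:56:01:01:21
```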

HTH, BR
Peter


-----Original Message-----
From: Ben Hutchings <b...@decadent.org.uk> 
Sent: Wednesday, March 26, 2025 7:21 PM
To: GASPAROVIC Peter OBS/MKT <peter.gasparo...@orange.com>
Cc: 1054...@bugs.debian.org; Bastian Blank <wa...@debian.org>
Subject: Re: RE: Bug#1054642: Failing ARP relay from external -> Linux bridge 
-> veth port --> NS veth port

On Wed, 26 Mar 2025 13:52:25 +0000 peter.gasparo...@orange.com wrote:
> Hello,
> Thanks for the comment. I then wonder how you can get lost in my short
> communication, in contrast to the ton of material you ever read to become a pro.

So far you have talked about namespaces, veth and macvlan devices, and vSwitch, 
but nowhere have you explained how these are actually connected together and 
configured.

In the Google doc you linked earlier you showed connections between VMs and 
vSwitches, but nothing about the virtual devices on each VM.

> I truly don't see why a minimal VETH/MACVLAN config should reflect the
> inbound traffic for anybody to see (in an ESXi environment at least) -- where
> is it documented please?

I think there's been some confusion here.  Bastian (and I) thought you were 
talking about the usual VEPA mode of macvlan, where macvlan devices attached to 
the same underlying device are not bridged, and all packets transmitted on a 
macvlan device are forwarded to the underlying device.
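
The distinction comes down to the `mode` argument at creation time (a sketch; 
`mv0`, `mv1`, and the parent `ens224` are placeholder names):

```shell
# VEPA (the kernel default): frames between macvlans on the same parent are
# sent out to the external switch, which must hairpin them back
ip link add link ens224 name mv0 type macvlan mode vepa

# private: macvlans on the same parent cannot reach each other at all
ip link add link ens224 name mv1 type macvlan mode private
```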

But on re-reading it seems like you are saying that packets received on the 
external interface of the VM are being forwarded back out of that same 
interface.  I would agree this is unexpected behaviour, but until we see your 
actual configuration it's impossible to know whether this is a bug or 
misconfiguration.

Ben.

--
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                      - Albert Einstein
