This bug was fixed in the package linux-bluefield - 5.4.0-1040.44

---------------
linux-bluefield (5.4.0-1040.44) focal; urgency=medium
  * focal/linux-bluefield: 5.4.0-1040.44 -proposed tracker (LP: #1978639)

  * fix ref leak when switching zones (LP: #1979009)
    - net/sched: act_ct: fix ref leak when switching zones

  * Fix XFRM flags validity check (LP: #1978967)
    - SAUCE: net/xfrm: Fix XFRM flags validity check

  [ Ubuntu: 5.4.0-121.137 ]

  * focal/linux: 5.4.0-121.137 -proposed tracker (LP: #1978666)
  * Packaging resync (LP: #1786013)
    - debian/dkms-versions -- update from kernel-versions (main/2022.05.30)
  * CVE-2022-28388
    - can: usb_8dev: usb_8dev_start_xmit(): fix double dev_kfree_skb() in
      error path
  * test_vxlan_under_vrf.sh in net from ubuntu_kernel_selftests failed
    (Check VM connectivity through VXLAN (underlay in the default VRF)
    [FAIL]) (LP: #1871015)
    - selftests: net: test_vxlan_under_vrf: fix HV connectivity test
  * [UBUNTU 20.04] CPU-MF: add extended counter set definitions for new
    IBM z16 (LP: #1974433)
    - s390/cpumf: add new extended counter set for IBM z16
  * [UBUNTU 20.04] KVM nesting support leaks too much memory, might result
    in stalls during cleanup (LP: #1974017)
    - KVM: s390: vsie/gmap: reduce gmap_rmap overhead
  * [UBUNTU 20.04] Null Pointer issue in nfs code running Ubuntu on IBM Z
    (LP: #1968096)
    - NFS: Fix up nfs_ctx_key_to_expire()

  [ Ubuntu: 5.4.0-120.136 ]

  * CVE-2022-21123 // CVE-2022-21125 // CVE-2022-21166
    - cpu/speculation: Add prototype for cpu_show_srbds()
    - x86/cpu: Add Jasper Lake to Intel family
    - x86/cpu: Add Lakefield, Alder Lake and Rocket Lake models to the to
      Intel CPU family
    - x86/cpu: Add another Alder Lake CPU to the Intel family
    - Documentation: Add documentation for Processor MMIO Stale Data
    - x86/speculation/mmio: Enumerate Processor MMIO Stale Data bug
    - x86/speculation: Add a common function for MD_CLEAR mitigation update
    - x86/speculation/mmio: Add mitigation for Processor MMIO Stale Data
    - x86/bugs: Group MDS, TAA & Processor MMIO Stale Data mitigations
    - x86/speculation/mmio: Enable CPU Fill buffer clearing on idle
    - x86/speculation/mmio: Add sysfs reporting for Processor MMIO Stale Data
    - x86/speculation/srbds: Update SRBDS mitigation selection
    - x86/speculation/mmio: Reuse SRBDS mitigation for SBDS
    - KVM: x86/speculation: Disable Fill buffer clear within guests
    - x86/speculation/mmio: Print SMT warning

 -- Zachary Tahenakos <zachary.tahena...@canonical.com>  Tue, 21 Jun 2022 13:59:23 -0400

** Changed in: linux-bluefield (Ubuntu Focal)
       Status: Fix Committed => Fix Released

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-21123

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-21125

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-21166

** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2022-28388

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-bluefield in Ubuntu.
https://bugs.launchpad.net/bugs/1979009

Title:
  fix ref leak when switching zones

Status in linux-bluefield package in Ubuntu:
  Invalid
Status in linux-bluefield source package in Focal:
  Fix Released

Bug description:
  * Explain the bug(s)

  When switching conntrack zones or network namespaces without doing a ct
  clear in between, a reference to the old ct entry is leaked:
  tcf_ct_skb_nfct_cached() returns false and tcf_ct_flow_table_lookup() may
  simply overwrite the cached entry without releasing it.

  * brief explanation of fixes

  Since the ct entry is not reusable in this case, free it already at
  tcf_ct_skb_nfct_cached().
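  For illustration, the leak only triggers when the conntrack state is not
  cleared between zone lookups; a flow inserting OVS's ct_clear action before
  re-tracking in the next zone would avoid the stale-entry condition. This is
  a sketch only (it reuses the bridge and zone numbers from the test setup
  below and is not part of the reproduction):

  # ct_clear releases the cached ct entry before the zone 7 lookup
  ovs-ofctl add-flow ovs-br "table=5, ip,ct_state=+trk+est, actions=ct_clear,ct(zone=7, table=7)"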
  * How to test

  Setup ovs with ovs offload enabled on veth or other software-only devices
  (so rules are only offloaded to TC and not also to HW, which would take
  longer), example:

  function config_veth() {
      local ns=$1
      local ip=$2
      local peer=${ns}_peer
      local veth=${ns}_veth

      echo "Create namespace $ns, veths: hv $veth <-> ns $peer ($ip)"
      ip netns add $ns
      ip link del $veth &>/dev/null
      ip link add $veth type veth peer name $peer
      ip link set $veth up
      ip link set $peer netns $ns
      ip netns exec $ns ifconfig $peer $ip/24 mtu 1400 up
  }

  IP1="7.7.7.1"
  IP2="7.7.7.2"

  config_veth ns0 $IP1
  config_veth ns1 $IP2

  ovs-vsctl add-br ovs-br
  ovs-vsctl add-port ovs-br ns0_veth
  ovs-vsctl add-port ovs-br ns1_veth

  Add openflow rules configuring two or more chained zones, example:

  function configure_rules() {
      local orig_dev=$1
      local reply_dev=$2

      ovs-ofctl del-flows ovs-br

      ovs-ofctl add-flow ovs-br "table=0, arp, actions=normal"

      #ORIG
      ovs-ofctl add-flow ovs-br "table=0, ip,in_port=$orig_dev,ct_state=-trk, actions=ct(zone=5, table=5)"
      ovs-ofctl add-flow ovs-br "table=5, ip,in_port=$orig_dev,ct_state=+trk+new, actions=ct(zone=5, commit),ct(zone=7, table=7)"
      ovs-ofctl add-flow ovs-br "table=5, ip,in_port=$orig_dev,ct_state=+trk+est, actions=ct(zone=7, table=7)"
      ovs-ofctl add-flow ovs-br "table=7, ip,in_port=$orig_dev,ct_state=+trk+new, actions=ct(zone=7, commit),output:$reply_dev"
      ovs-ofctl add-flow ovs-br "table=7, ip,in_port=$orig_dev,ct_state=+trk+est, actions=output:$reply_dev"

      #REPLY
      ovs-ofctl add-flow ovs-br "table=0, ip,in_port=$reply_dev,ct_state=-trk, actions=ct(zone=7, table=8)"
      ovs-ofctl add-flow ovs-br "table=8, ip,in_port=$reply_dev,ct_state=+trk+est, actions=ct(zone=5, table=9)"
      ovs-ofctl add-flow ovs-br "table=9, ip,in_port=$reply_dev,ct_state=+trk+est, actions=output:$orig_dev"

      ovs-ofctl dump-flows ovs-br --color
  }

  configure_rules ns0_veth ns1_veth

  Run udp/tcp traffic from ns0_veth to ns1_veth such that it passes both
  zones in the resulting tc rules (a sample traffic run is sketched at the
  end of this message), then check the conntrack dying table after ending
  the traffic:

  conntrack -L dying

  If the bug occurs, the dying table won't be empty and will contain entries
  with refcount > 0:

  tcp 6 0 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=47180 dport=6538 src=127.0.0.1 dst=127.0.0.1 sport=6538 dport=47180 ... mark=0 use=2

  * What it could break.

  Reaching a full conntrack table and then dropping packets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-bluefield/+bug/1979009/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp
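A sample traffic run for the reproduction steps above (sketch only; iperf3 is
assumed to be installed, any TCP/UDP generator between the two namespaces
works the same way):

# TCP/UDP between ns0 (7.7.7.1) and ns1 (7.7.7.2) through the OVS bridge
ip netns exec ns1 iperf3 -s -D                  # server in ns1
ip netns exec ns0 iperf3 -c 7.7.7.2 -t 10       # TCP flow through zones 5 and 7
ip netns exec ns0 iperf3 -c 7.7.7.2 -u -t 10    # same path with UDP
pkill iperf3                                    # stop the server

# After the traffic ends, the dying table should stay empty on a fixed kernel
conntrack -L dying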