This is upstream with kernel v6.6-rc6, so it affects Jammy, Lunar and Mantic. (It would have been ideal to have it tagged as an upstream stable update.)
** Also affects: linux (Ubuntu Jammy)
   Importance: Undecided
       Status: New

** Also affects: linux (Ubuntu Mantic)
   Importance: Undecided
     Assignee: Skipper Bug Screeners (skipper-screen-team)
       Status: New

** Also affects: linux (Ubuntu Lunar)
   Importance: Undecided
       Status: New

** Also affects: ubuntu-z-systems
   Importance: Undecided
       Status: New

** Changed in: ubuntu-z-systems
     Assignee: (unassigned) => Skipper Bug Screeners (skipper-screen-team)

** Changed in: linux (Ubuntu Mantic)
     Assignee: Skipper Bug Screeners (skipper-screen-team) => (unassigned)

--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/2039575

Title:
  [UBUNTU 22.04] SMC stats: Wrong bucket calculation for payload of
  exactly 4096 bytes.

Status in Ubuntu on IBM z Systems:
  New
Status in linux package in Ubuntu:
  New
Status in linux source package in Jammy:
  New
Status in linux source package in Lunar:
  New
Status in linux source package in Mantic:
  New

Bug description:
  Bug description by Nils H.:

  ------------
  Overview:
  ------------

  The following line in the SMC stats code (net/smc/smc_stats.h) caught
  my attention when using a payload of exactly 4096 bytes:

  #define SMC_STAT_PAYLOAD_SUB(_smc_stats, _tech, key, _len, _rc) \
  do { \
          typeof(_smc_stats) stats = (_smc_stats); \
          typeof(_tech) t = (_tech); \
          typeof(_len) l = (_len); \
          int _pos = fls64((l) >> 13); \
          typeof(_rc) r = (_rc); \
          int m = SMC_BUF_MAX - 1; \
          this_cpu_inc((*stats).smc[t].key ## _cnt); \
          if (r <= 0) \
                  break; \
          _pos = (_pos < m) ? ((l == 1 << (_pos + 12)) ? _pos - 1 : _pos) : m; \   <---
          this_cpu_inc((*stats).smc[t].key ## _pd.buf[_pos]); \
          this_cpu_add((*stats).smc[t].key ## _bytes, r); \
  } \
  while (0)

  With l = 4096, _pos evaluates to -1, so the following histogram
  update indexes buf[-1]. (A standalone sketch of this arithmetic
  follows after the repro steps below.)

  Checking with the following uperf profile:

  # cat rr1c-4kx4k---1.xml
  <?xml version="1.0"?>
  <profile name="TCP_RR">
    <group nprocs="1">
    <!--group nthreads="1"-->  <!-- if we want to run processes -->
    <!--group nprocs="1"-->
      <transaction iterations="1">
        <flowop type="connect" options="remotehost=<remote IP> protocol=tcp tcp_nodelay" />
      </transaction>
      <transaction iterations="1">
        <flowop type="write" options="size=4096"/>
        <flowop type="read" options="size=4096"/>
      </transaction>
      <transaction iterations="1">
        <flowop type="disconnect" />
      </transaction>
    </group>
  </profile>

  smcd stats output:

  # smcd -d stats reset
  SMC-D Connections Summary
    Total connections handled          2
    SMC connections                    2   (client 2, server 0)
      v1                               0
      v2                               2
    Handshake errors                   0   (client 0, server 0)
    Avg requests per SMC conn       14.0
    TCP fallback                       0   (client 0, server 0)

  RX Stats
    Data transmitted (Bytes)        5796   (5.796K)
    Total requests                     9
    Buffer full                        0   (0.00%)
    Buffer downgrades                  0
    Buffer reuses                      0
            8KB   16KB   32KB   64KB  128KB  256KB  512KB  >512KB
    Bufs      0      0      0      2      0      0      0       1
    Reqs      8      0      0      0      0      0      0       0

  TX Stats
    Data transmitted (Bytes)        9960   (9.960K)
    Total requests                    19
    Buffer full                        0   (0.00%)
    Buffer full (remote)               0   (0.00%)
    Buffer too small                   0   (0.00%)
    Buffer too small (remote)          0   (0.00%)
    Buffer downgrades                  0
    Buffer reuses                      0
            8KB   16KB   32KB   64KB  128KB  256KB  512KB  >512KB
    Bufs      0      2      0      0      0      0      0       0
    Reqs     18      0      0      0      0      0      0       1

  Extras
    Special socket calls               0
      cork                             0
      nodelay                          0
      sendpage                         0
      splice                           0
      urgent data                      0

  Instead of counting these requests in the wrong >512KB bucket, the
  expected output is 19 reqs in the 8KB bucket for the TX stats and
  9 reqs in the 8KB bucket for the RX stats.

  --------
  Repro:
  --------

  0. Install uperf.
  1. Reset SMC-D stats on client and server.
  2. Start uperf at server side: "uperf -vs".
  3. Update profile with remote IP (server IP) and start uperf at
     client: "uperf -vai 5 -m rr1c-4kx4k---1.xml" (uperf profile, see
     above).
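To make the off-by-one easier to see outside the kernel, here is a
minimal userspace sketch that replays the bucket arithmetic for a few
payload sizes. It is an illustration, not the kernel code: fls64() is
emulated with a compiler builtin, SMC_BUF_MAX is assumed to be 8
(matching the eight histogram columns, 8KB through >512KB, in the smcd
output above), and bucket_pos_fixed() shows one possible correction
(computing on l - 1, in the spirit of, though not necessarily identical
to, the change that landed upstream):

  /* bucket_demo.c - userspace sketch, not the actual kernel file */
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SMC_BUF_MAX 8  /* assumption: 8 buckets, 8KB .. >512KB */

  /* Emulate the kernel's fls64(): find last set bit, fls64(0) == 0. */
  static int fls64(uint64_t x)
  {
          return x ? 64 - __builtin_clzll(x) : 0;
  }

  /* Current (buggy) calculation, the line marked "<---" above. */
  static int bucket_pos_buggy(uint64_t l)
  {
          int _pos = fls64(l >> 13);
          int m = SMC_BUF_MAX - 1;

          /* For l == 4096: fls64(0) == 0 and 4096 == 1 << 12,
           * so _pos becomes 0 - 1 == -1. */
          _pos = (_pos < m) ?
                 ((l == 1ULL << (_pos + 12)) ? _pos - 1 : _pos) : m;
          return _pos;
  }

  /* One possible correction (a sketch): computing on l - 1 puts exact
   * powers of two such as 4096 and 8192 into the 8KB bucket without
   * needing a special case, and _pos can never go below 0. */
  static int bucket_pos_fixed(uint64_t l)
  {
          int m = SMC_BUF_MAX - 1;
          int _pos;

          if (l == 0)
                  return 0;
          _pos = fls64((l - 1) >> 13);
          return (_pos <= m) ? _pos : m;
  }

  int main(void)
  {
          uint64_t sizes[] = { 4095, 4096, 4097, 8192, 8193, 16384 };
          size_t i;

          for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                  printf("len=%6llu  buggy _pos=%2d  fixed _pos=%d\n",
                         (unsigned long long)sizes[i],
                         bucket_pos_buggy(sizes[i]),
                         bucket_pos_fixed(sizes[i]));
          return 0;
  }

Built and run with, e.g., "gcc -o bucket_demo bucket_demo.c &&
./bucket_demo" (hypothetical file name), this prints:

  len=  4095  buggy _pos= 0  fixed _pos=0
  len=  4096  buggy _pos=-1  fixed _pos=0
  len=  4097  buggy _pos= 0  fixed _pos=0
  len=  8192  buggy _pos= 0  fixed _pos=0
  len=  8193  buggy _pos= 1  fixed _pos=1
  len= 16384  buggy _pos= 1  fixed _pos=1

Only the exact 4096-byte payload goes wrong: buf[-1] is an
out-of-bounds per-cpu counter increment, which in the run above
evidently surfaced in the adjacent >512KB column.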
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/2039575/+subscriptions