Public bug reported:

I suspect this is a kernel bug.

With ubuntu <= 21, I find that this runs in about 13 seconds:

python3 -c "import timeit; print(timeit.Timer('for _ in range(0,1000): pass').timeit())"

With ubuntu >= 22, I find that it runs in about 83 seconds.

The problem seems to be specific to Cisco UCS hardware and can be mostly
mitigated by disabling hyperthreading.

I also tried counting to a million a thousand times instead of counting to
1000 a million times (a million being the number of times timeit runs the
statement by default), just in case the time measurement itself was the slow
part, but it made no difference.  Even a plain loop without timeit shows
about the same difference; both variants are sketched below.
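
For reference, the variants looked roughly like this (a sketch; the exact
counts and invocations are reconstructed rather than copied from my shell
history):

python3 -c "import timeit; print(timeit.Timer('for _ in range(0,1000000): pass').timeit(number=1000))"

time python3 -c "
for _ in range(0, 1000):
    for _ in range(0, 1000000):
        pass
"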

Originally, I encountered this when upgrading from 18 to 24.  We went
back and isolated the problem to something that changed between 21 and
22.  The version I actually care about is 24.

The only Cisco UCS systems we have are a bunch of Cisco UCS C220 M5SX
rack servers and a bunch of Cisco UCS B200 M5 blades.  All of them show
the regression.  I can confirm that on a variety of similarly-specced
supermicro systems, the regression does not occur.

The problem can be easily reproduced by booting off
https://releases.ubuntu.com/24.04.1/ubuntu-24.04.1-live-server-amd64.iso
(or various other versions) and dropping into a shell.  The installer
kernel behaves the same as the installed kernel across the various
versions.  So it should be possible for anyone with this hardware to
reproduce the issue by using the installer shells.  You may wish to use
an old python3 from a version-pinned docker image to get an apples-to-
apples comparison.
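
For example, something along these lines (the specific image tag is only an
illustration; any pinned python3 image should do):

docker run --rm python:3.6-slim python3 -c "import timeit; print(timeit.Timer('for _ in range(0,1000): pass').timeit())"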

If I run the experiment inside ubuntu18 containers on ubuntu21 and ubuntu22
hosts, I still get the dramatically different runtimes; i.e., the kernel
version, and not the userland or python version, is what seems to matter.

We have tried mitigations=off with no effect.
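
(For anyone reproducing, one way to set this: add mitigations=off to
GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, run update-grub, and
reboot; cat /proc/cmdline should then show the flag.)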

We have tried reverting various kernel scheduler configuration changes
back to their ubuntu21 settings with no effect.

We have tried disabling hyperthreading in the BIOS.  This had an enormous
effect: it reduces the runtime from 83 seconds to 17 seconds.  17 seconds is
still about 30% slower than 13, but it is obviously far better than 83.
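
For anyone who wants to test this without a BIOS change: recent kernels can
also toggle SMT at runtime, e.g.

echo off | sudo tee /sys/devices/system/cpu/smt/control

though I am assuming that is equivalent to the BIOS setting for the purposes
of this test.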

So just to recap:
13s: ubuntu21 with hyperthreading on
83s: ubuntu22 with hyperthreading on
17s: ubuntu22 with hyperthreading off

This machine has 2 sockets with 20 physical cores each (40 physical cores
total), for 80 logical cores once we account for hyperthreading.

Ideally I would prefer not to be forced to disable hyperthreading.  Even
if that is not possible, I am interested in avoiding the remaining 30%
slowdown.

sysbench --test=cpu and sysbench --test=memory also both exhibit a slowdown,
but it is more like 30% rather than the roughly 6x seen with the python
loop, even with hyperthreading turned on.  Example invocations are below.
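
The sysbench invocations were along these lines (a sketch; the exact options
and thread counts were not recorded):

sysbench --test=cpu run
sysbench --test=memory run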

I have used perf to profile python and found the time was spread out; I did
not see any particular smoking gun.  The python process makes fewer than 300
syscalls over its entire lifetime and performs virtually no context
switches.  I tried running it with realtime priority and with affinity for a
single core, which seemed to make little difference.  The python process
uses 100% of a cpu as it runs.  Rough invocations are sketched below.
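
Roughly the sort of commands used there (a sketch; exact flags may have
differed):

CMD='import timeit; print(timeit.Timer("for _ in range(0,1000): pass").timeit())'
perf stat -- python3 -c "$CMD"
perf record -g -- python3 -c "$CMD"             # then perf report; time spread out
strace -cf python3 -c "$CMD"                    # syscall count
sudo taskset -c 2 chrt -f 50 python3 -c "$CMD"  # pin to one core, SCHED_FIFO priority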

Any ideas?

** Affects: ubuntu
     Importance: Undecided
         Status: New

** Attachment added: "hwinfo.txt"
   https://bugs.launchpad.net/bugs/2083077/+attachment/5822644/+files/hwinfo.txt


Title:
  python3 counting 6x slowdown with ubuntu22 on cisco ucs hardware with
  hyperthreading
