> Which cloud is in use and what is the instance type?
This was seen on OVH. I don't think it's a public instance type. I
have attached the cpuid of the affected guest
(https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1973839/+attachment/5590542/+files/cpuid)
but unfortunately I don't have further details.
** Summary changed:
- 5.15.0-30-generic : unchecked MSR access error: WRMSR to 0x48 (tried to
write 0x0004)
+ 5.15.0-30-generic : SSBD mitigation results in "unchecked MSR access error:
WRMSR to 0x48 (tried to write 0x0004)" and flood of kernel traces
in some cloud providers
So after reading and experimenting a bit more, what the upstream change
is doing is setting the defaults to
spec_store_bypass_disable=prctl
spectre_v2_user=prctl
instead of "seccomp". This basically means that instead of all
seccomp() users setting these flags, it is up to userspace to set
manu
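For reference (a sketch, not verified on the affected instance type): the
current mode can be read from sysfs, and booting with
spec_store_bypass_disable=seccomp spectre_v2_user=seccomp should restore the
old defaults; under the prctl default, a process that wants the mitigation has
to ask for it itself, e.g.
prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0).
$ cat /sys/devices/system/cpu/vulnerabilities/spec_store_bypass
$ grep -o 'spec_store_bypass_disable=[^ ]*' /proc/cmdline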
I have bisected this, and the commit that *fixes* this between the focal
kernel (5.15.0-30-generic) and the current 5.17 release is
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2f46993d83ff4abb310ef7b4beced56ba96f0d9d
x86: change default to spec_store_bypass_disable=prctl spectre_v2_user=prctl
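(If anyone wants to repeat the bisection against the upstream tree, a rough
outline; the test step is whatever you use to boot each candidate kernel on an
affected instance and check dmesg for the WRMSR error:
$ git bisect start --term-new=fixed --term-old=broken
$ git bisect fixed v5.17
$ git bisect broken v5.15
  ...build and boot each candidate, mark it "fixed" or "broken" accordingly...
$ git show -s --oneline 2f46993d83ff4abb310ef7b4beced56ba96f0d9d
)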
I've marked this as Confirmed because the log collection (apport-collect
1973839) is hundreds of megabytes, as dmesg is full of the tracebacks
discussed above.
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
Public bug reported:
When booting this in one of our clouds, we see an error early in the
kernel output
kernel: unchecked MSR access error: WRMSR to 0x48 (tried to write
0x0004) at rIP: 0xabc90af4
(native_write_msr+0x4/0x20)
and then an unending stream of "bare" tracebacks
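(For context: MSR 0x48 is IA32_SPEC_CTRL and the value 0x0004 is the SSBD bit,
i.e. this is the SSB mitigation being applied. If the msr-tools package is
available in the guest, whether the hypervisor exposes the MSR at all can be
checked with:
$ sudo modprobe msr
$ sudo rdmsr 0x48
which presumably fails on instances like this one.)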
For anyone else finding this and wondering what the implications are, from
what I could find:
It seems that 2.31-0ubuntu9.3 was released for
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1914044 but as
noted in
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1914044/comments/18
"This f
Public bug reported:
On our hosts we are seeing
---
$ sudo apt-get autoremove
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
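(To see which package is stuck in the half-configured state, something along
these lines should show it:
$ dpkg --audit
$ dpkg -l | grep -v '^ii'
and "sudo dpkg --configure -a" is often the suggested next step.)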
** Summary changed:
- mysqldump --all-databases not dumping with 5.7.33
+ mysqldump --all-databases not dumping any databases with 5.7.33
https://bugs.launchpad.net/bugs/1914695
Public bug reported:
Since the 5.7.33 upgrade, our Xenial host, talking to an old server that
reports itself as 5.1.73-1+deb6u1, is no longer dumping all databases with
"--all-databases". It connects, exits with 0 and puts out some info, but
the actual databases are not dumped at all.
I ran strace
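(For anyone trying to reproduce: the shape of the test is roughly the
following, where "old-server" and the file names are placeholders rather than
the hosts from this report:
$ mysqldump -h old-server --all-databases > all.sql ; echo $?
0
$ grep -c 'CREATE TABLE' all.sql
0
$ strace -f -o mysqldump.trace mysqldump -h old-server --all-databases > all.sql
)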
It seems the same thing has happened with Focal shipping openafs
1.8.4~pre1
Again, 1.8.5 builds and works (we don't have it in production yet, but it is
passing our build and functional testing) and would be a better choice.
We won't run with ~pre versions after our issues with them last time.
[1] http
Public bug reported:
I'm sure it wasn't so much a decision to ship this version in Bionic as
an artefact of freeze dates, etc, but 1.8.0~pre5 is seemingly not a very
good place to be. In OpenDev infrastructure we have noticed unfortunate
behaviour like serving corrupt files and then holding onto
Public bug reported:
This is a bug stating the rather obvious fact that zypper is unavailable on
bionic.
This causes a bit of an issue for diskimage-builder [1] building
opensuse-minimal images on bionic hosts. The way we do this is to use
zypper on the build host to install everything into a chroot.
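(Roughly what that looks like; the chroot path, repo URL and package are just
illustrative here, not the exact diskimage-builder element steps:
$ sudo zypper --root /tmp/dib-chroot ar http://download.opensuse.org/distribution/leap/15.1/repo/oss/ oss
$ sudo zypper --root /tmp/dib-chroot --gpg-auto-import-keys refresh
$ sudo zypper --root /tmp/dib-chroot install --no-recommends bash
)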
There is a patch series out that addresses aarch64 support [1]. I have
managed to build this against the current 1.8 series debian rules file
and there are some packages available at
deb http://tarballs.openstack.org/package-afs-aarch64/ ./
which so far I am using successfully. With the patche
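(To consume that repository, roughly; the list file name is arbitrary and the
package names are the usual Debian/Ubuntu OpenAFS ones, and if the repo is
unsigned apt will additionally need it marked trusted:
$ echo 'deb http://tarballs.openstack.org/package-afs-aarch64/ ./' | sudo tee /etc/apt/sources.list.d/openafs-aarch64.list
$ sudo apt-get update
$ sudo apt-get install openafs-client openafs-modules-dkms
)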
(I just sent this to the list, but putting it here too)
While I agree that a coredump is not that likely to help, I would also
like to come to that conclusion after inspecting a coredump :) I've
found things in the heap before that give clues as to what real
problems are.
To this end, I've proposed
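(Concretely, inspecting a core might look something like this; the binary and
core paths are examples, not from an actual crash here:
$ gdb /usr/sbin/apache2 /var/crash/core.apache2.12345
(gdb) bt full
(gdb) info registers
(gdb) x/32gx $sp
)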
Valgrind would be great, but it is the 100-pound gorilla approach. I'll play
with maybe some lighter-weight things like Electric Fence, which could give us
some insight. Something like that is going to segfault, so cores seem a top
priority. I'm probably more optimistic about the general usefulness of
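(A sketch of what that could look like, assuming the electric-fence package;
the core_pattern value and the single-process -X run are just examples:
# ulimit -c unlimited
# sysctl -w kernel.core_pattern=/var/crash/core.%e.%p
# LD_PRELOAD=libefence.so.0.0 apache2ctl -X
)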
Public bug reported:
We have seen consistent but infrequent segfaults of apache on a trusty
production server with 2.4.7-1ubuntu4.13 (for more examples, see [1])
---
Oct 2 19:01:03 static kernel: [8029151.932468] apache2[10642]: segfault at
7fac797803a8 ip 7fac90b345e0 sp 7fac84ff8e20 e
Public bug reported:
If you try to specify the starting offset for the partition in sectors
and ask sfdisk to use the rest of the disk (with "+"), sfdisk will
incorrectly calculate the end cylinders and not create the partition.
For example
---
# dd if=/dev/zero of=/tmp/disk.img bs=1M count=10
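(The rest of the reproduction was cut off above; the failing invocation is of
this general shape, with sector units and "+" for "use the rest of the disk",
though the exact numbers in the original may differ:
# echo '2048,+' | sfdisk -uS /tmp/disk.img
)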