Re: New requirements for APT repository signing

2024-03-01 Thread Julian Andres Klode
On Fri, Mar 01, 2024 at 01:02:38AM +0100, Salvo Tomaselli wrote:
> > Any other keys will cause warnings. These warnings will become
> > errors in March as we harden it up for the Ubuntu 24.04 release
> 
> Perhaps the announcement should have been sent earlier than 28th Feb then. Or 
> is there a mistake and they will become errors at a later date?

There is no mistake, but the Ubuntu folks had more of a heads-up due
to an email on ubuntu-devel and internal meetings at Canonical.

In any case, as the email says in many fewer words:

for Debian this doesn't take effect now unless we
ship the patch in the gnupg 2.4 packaging (either by merging my backport
commit into 2.4.4 or when 2.4.5 lands), which is in experimental; unstable
is still tracking 2.2 and might be for the rest of the year.
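
If you want to check whether a key you ship or have configured would be
affected, gpg can show the algorithm and size directly; for example (the
path is just an illustration), anything other than rsa2048 or larger,
ed25519 or ed448 in the "pub" line is what will start warning:

  $ gpg --show-keys /etc/apt/keyrings/example-archive-keyring.asc
  pub   rsa1024 2015-06-01 [SC]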
-- 
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer  i speak de, en




Re: New requirements for APT repository signing

2024-03-01 Thread Julian Andres Klode
On Thu, Feb 29, 2024 at 12:29:40AM +, Phil Wyett wrote:
> On Wed, 2024-02-28 at 20:20 +0100, Julian Andres Klode wrote:
> > APT 2.7.13 just landed in unstable and with GnuPG 2.4.5 installed,
> > or 2.4.4 with a backport from the 2.4 branch, requires repositories
> > to be signed using one of
> > 
> > - RSA keys of at least 2048 bit
> > - Ed25519
> > - Ed448
> > 
> > Any other keys will cause warnings. These warnings will become
> > errors in March as we harden it up for the Ubuntu 24.04 release,
> > which was the main driver to do the change *now*.
> > 
> > If you operate third-party repositories using different key
> > algorithms, now is your time to migrate before you get hit
> > with an error.
> > 
> > For the Ubuntu perspective, feel free to check out the discourse
> > post:
> > 
> > https://discourse.ubuntu.com/t/new-requirements-for-apt-repository-signing-in-24-04/42854
> 
> Hi,
> 
> Could I be pointed to the public conversation, any plans or bug reports 
> related to this
> update and transition etc. for affected users?


Some more information is in the GnuPG feature request and the upstream task:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1042391 (July 2023)
https://dev.gnupg.org/T6946 (Jan 2024)

Original announcement at

https://lists.ubuntu.com/archives/ubuntu-devel/2024-January/042883.html

It has since been revised after rounds of feedback on internal specifications
and in meetings.

Not sure what transition you are looking for; that's up to you
repository owners to figure out.
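
If it helps repository owners, here is a rough sketch of a migration to
Ed25519 (key name, e-mail and file names below are purely illustrative, and
you will likely want to keep signing with the old key as well during a
transition, which gpg supports via multiple -u options):

  # generate a new Ed25519 signing-only key
  $ gpg --quick-generate-key "Example Repo Signing Key <repo@example.org>" ed25519 sign never

  # export the public key for users to drop into /etc/apt/keyrings/
  $ gpg --armor --export repo@example.org > example-archive-keyring.asc

  # re-sign the repository metadata with the new key
  $ gpg --default-key repo@example.org -abs -o Release.gpg Release
  $ gpg --default-key repo@example.org --clearsign -o InRelease Release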
-- 
debian developer - deb.li/jak | jak-linux.org - free software dev
ubuntu core developer  i speak de, en




Bug#1065171: ITP: aiooui -- Asynchronous OUI lookups in Python

2024-03-01 Thread Edward Betts
Package: wnpp
Severity: wishlist
Owner: Edward Betts 
X-Debbugs-Cc: debian-devel@lists.debian.org, debian-pyt...@lists.debian.org

* Package name: aiooui
  Version : 0.1.5
  Upstream Author : J. Nick Koston 
* URL : https://github.com/bluetooth-devices/aiooui
* License : MIT
  Programming Lang: Python
  Description : Asynchronous OUI lookups in Python

  aiooui offers an asynchronous approach to performing Organizationally Unique
  Identifier (OUI) lookups, enabling efficient identification of vendors based
  on MAC addresses in Python applications. This module is particularly useful
  for developers working on networking tools, security applications, or any
  project that requires vendor identification from MAC addresses. The package
  supports asynchronous programming models, making it suitable for use in modern
  Python asynchronous frameworks.

I plan to maintain this package as part of the Python team.



Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread Nilesh Patra
Hi,

When I want to fix autopkgtests for a package on a particular architecture, I 
currently
see no way to run autopkgtests before I dput since porter boxes do not provide 
root
access which autopkgtest needs.

Currently I am manually hacking around the test scripts and running the 
autopkgtests but
this does not emulate the autopkgtest environment well enough. It also does not 
work
well for daemon-like packages for instance.

Additionally, say I have a package which FTBFS due to something broken in a
build dependency on a particular architecture.
If I fix up the problem in the build dependency, there is no way I can test
whether the target package really works on that arch, since I do not see a way
to install the fixed builddep without uploading it to the archive.

Have you found any way around these?

Best,
Nilesh




Re: Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread Andrey Rahmatullin
On Fri, Mar 01, 2024 at 06:28:50PM +0530, Nilesh Patra wrote:
> Hi,
> 
> When I want to fix autopkgtests for a package on a particular architecture, I 
> currently
> see no way to run autopkgtests before I dput since porter boxes do not 
> provide root
> access which autopkgtest needs.
> 
> Currently I am manually hacking around the test scripts and running the 
> autopkgtests but
> this does not emulate the autopkgtest environment well enough. It also does 
> not work
> well for daemon-like packages for instance.
> 
> Additionally, say, I have a package which FTBFS due to something broken in a 
> build dependency
> on a particular architecture.
> If I fixup the problem in the build-dependency, there is no way I could test 
> if the target
> package really works on that arch since I do not see a way to install the 
> fixed builddep without
> uploading it to the archive.
> 
> Have you found any way around these?
You can use local sbuild chroots for foreign architectures, both for
building and, I assume, running autopkgtests.
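
For the record, a rough sketch of that workflow (architecture, chroot and
package names below are only examples, and foreign-architecture chroots
additionally need qemu-user-static/binfmt support on the host):

  # create a foreign-architecture sbuild chroot
  $ sudo sbuild-createchroot --arch=armhf unstable /srv/chroot/unstable-armhf-sbuild http://deb.debian.org/debian

  # build against it
  $ sbuild --arch=armhf -d unstable mypkg_1.0-1.dsc

  # run the autopkgtests against the same chroot
  $ autopkgtest mypkg_1.0-1_armhf.changes -- schroot unstable-armhf-sbuild

For the build-dependency case you mention, sbuild's --extra-package=foo.deb
lets a locally built .deb satisfy build dependencies, and autopkgtest likewise
accepts extra .deb arguments to install into the testbed.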



-- 
WBR, wRAR




Re: Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread David Bremner
Nilesh Patra  writes:

> When I want to fix autopkgtests for a package on a particular architecture, I 
> currently
> see no way to run autopkgtests before I dput since porter boxes do not 
> provide root
> access which autopkgtest needs.
>
> Currently I am manually hacking around the test scripts and running the 
> autopkgtests but
> this does not emulate the autopkgtest environment well enough. It also does 
> not work
> well for daemon-like packages for instance.

Related, we wouldn't need to use the porterboxes if the
situation for running autopkgtests locally was better.

I have complained at length on IRC on the difficulties of running
autopkgtests locally on non-amd64 architectures. There is some tooling
to build images for e.g. the qemu backend, but in my experience it does
not work smoothly. I think the autopkgtest maintainers could use help
with improving this tooling.  Personally I am reluctant to add non-amd64
autopkgtests to packages with the current state of tooling. I do not
consider "upload and find out" an acceptable debugging strategy. 



Re: Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread Nilesh Patra
On Fri, Mar 01, 2024 at 06:03:16PM +0500, Andrey Rahmatullin wrote:
> You can use local sbuild chroots for foreign architectures, both for
> building and, I assume, running autopkgtests.

I know, but that is not something I want. This invalidates the whole point of
using porter machines.

Best,
Nilesh




Re: Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread Andrey Rahmatullin
On Fri, Mar 01, 2024 at 07:15:16PM +0530, Nilesh Patra wrote:
> > You can use local sbuild chroots for foreign architectures, both for
> > building and, I assume, running autopkgtests.
> 
> I know, but that is not something I want. This invalidates the whole point of
> using porter machines.
I thought the whole point of using porter machines was being able to run at
least something on architectures you don't have otherwise. Local chroots
are superior to that, not inferior, when they are also available.


-- 
WBR, wRAR




Re: Bug#1065022: libglib2.0-0t64: t64 transition breaks the systems

2024-03-01 Thread Helmut Grohne
On Thu, Feb 29, 2024 at 06:53:56AM +0100, Paul Gevers wrote:
> Well, officially downgrading isn't supported (although it typically works)
> *and* losing files is one of the problems of our merged-/usr solution (see
> [1]). I *suspect* this might be the cause. We're working hard (well, helmut
> is) to protect us and our users from losing files on upgrades. We don't
> protect against downgrades.

As much as we like blaming all lost files on the /usr-move, this is not
one of them. If you are downgrading to a package that has formerly been
replaced, you have always lost files and you have always had to
reinstall the package.

While t64 has quite some interactions with the /usr-move, I am closely
monitoring the situation and have been filing multiple bugs per day
recently about the relevant peculiarities. I don't think any of the
fallout we see here is reasonably attributable to the /usr-move. The most
recent practical issues I've seen were related to image building tools
such as live-build and grml. When it comes to lost files, we're not
addressing them based on user reports (as there are practically none),
but on rigorous analysis preventing users from experiencing them in the
first place.

Helmut



Re: Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread Jérôme Charaoui

On 2024-03-01 at 08:09, David Bremner wrote:

> Nilesh Patra  writes:
>
>> When I want to fix autopkgtests for a package on a particular architecture,
>> I currently see no way to run autopkgtests before I dput since porter boxes
>> do not provide root access which autopkgtest needs.
>>
>> Currently I am manually hacking around the test scripts and running the
>> autopkgtests but this does not emulate the autopkgtest environment well
>> enough. It also does not work well for daemon-like packages for instance.
>
> Related, we wouldn't need to use the porterboxes if the
> situation for running autopkgtests locally was better.
>
> I have complained at length on IRC on the difficulties of running
> autopkgtests locally on non-amd64 architectures. There is some tooling
> to build images for e.g. the qemu backend, but in my experience it does
> not work smoothly. I think the autopkgtest maintainers could use help
> with improving this tooling.  Personally I am reluctant to add non-amd64
> autopkgtests to packages with the current state of tooling. I do not
> consider "upload and find out" an acceptable debugging strategy.


While struggling with this issue I did find out that sbuild-qemu-create 
can be useful to build non-amd64 qemu images that are useable for 
autopkgtests, although that only supports a small subset of architectures.


But I've thrown in the towel recently on jruby autopkgtests in part for 
this reason.




Re: [APT repo] Provide a binary-all/Packages only, no primary arch list

2024-03-01 Thread MichaIng

Hi David,

sorry for the late reply, and many thanks for your detailed answer.

What I mean by "falls back" is that APT downloads a package linked in
the arch-specific index, if present, and the one from the "all" index only if
the package is not listed in the arch-specific one (or if there is no
arch-specific one).


> So you configured apt on the user systems to support riscv64,
> but didn't change anything in the repository?

Not sure what you mean by "configured apt on the user systems to
support riscv64"? I added the key and repo via sources.list.d to a
riscv64 system, as that is all it needs? Probably you mean
/var/lib/dpkg/arch, which contains riscv64 OOTB, as this system was
created via debootstrap to generate a base Debian with this
architecture explicitly.


On the repository, binary-all/Packages was added to enable riscv64
system support, and since all Webmin packages declare themselves
(correctly) as "Architecture: all", we thought it would actually be
cleaner and correct to provide an "all" index only and remove all
arch-specific indexes from the repo. But then we faced this warning,
which is the reason for the confusion about how it is intended to be done.


Of course we can add binary-riscv64/Packages, and the same for all other
new architectures added to Debian in the future, to mute the warning.
But having a large number of duplicate indexes which all contain/link
the very same "Architecture: all" packages seems somehow
unnecessarily complicated, and even wrong for explicitly arch-agnostic
packages.


> Yes, i.e. no and yes there is. The previously mentioned wiki says this
> about the Architectures field: "The field identifies which architectures
> are supported by this repository." So your repository doesn't support
> this architecture and doesn't even ship the data, the user has configured
> apt to get. Something is fishy, better warn about it.

I see. We were assuming that "all" implies that a repo supports all
architectures, at least for the subset of packages listed in the
"all" index. The wiki however does not say anything explicitly about
that, but only that the "all" index is an indicator that "Architecture:
all" packages are not listed in the arch-specific indexes.


I understand now that APT repos are intended to explicitly declare every
architecture they support, regardless of whether they provide
"Architecture: all" packages only or others as well. I tend to agree
that being explicit is usually better; in this case it means that
someone has usually tested the package(s) on the particular architecture
before the repo maintainer makes the effort to add it to the repo explicitly.


Thanks for taking the time; we now know that the intended way is to
add binary-riscv64/Packages to declare riscv64 support explicitly.
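
For anyone landing here later, a minimal sketch of what that ends up looking
like (the component name and field values below are only illustrative, not
necessarily our real repo). In the repository's Release file:

  Architectures: amd64 arm64 riscv64 all
  Components: contrib

and the matching per-architecture index alongside the "all" one:

  contrib/binary-riscv64/Packages
  contrib/binary-all/Packages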


Best regards,

Micha

On 03.12.2023 at 15:27, David Kalnischkies wrote:

> Hi,
>
> (I think this isn't a good mailing list for apt vs. third-party repo
>  config… users@ probably could have answered that already, deity@ if
>  you wanted full APT background which I will provide here now…
>  reordered quotes slightly to tell a better story)
>
> On Sat, Dec 02, 2023 at 06:40:33PM +0100, MichaIng wrote:
>> we recognised that APT falls back to/uses "binary-all/Packages"
>
> APT doesn't fall back to it, it is its default behaviour to use them if
> available and supported (← that word becomes important later on).
>
> Debian repos actually opt out of it for Packages files:
> https://wiki.debian.org/DebianRepository/Format#No-Support-for-Architecture-all
>
>> while checking how to best enable riscv64 support for Webmin's own APT
>> repository
>
> And what did you do to the repository to enable riscv64 support?
>
>> but still complains with a warning if no "binary-riscv64/Packages"
>> is present and declared in the Release file of the APT repository:
>> ---
>> W: Skipping acquire of configured file 'contrib/binary-riscv64/Packages' as
>> repository 'https://download.webmin.com/download/repository sarge InRelease'
>> does not seem to provide it (sources.list entry misspelt?)
>> ---
>
> So you configured apt on the user systems to support riscv64,
> but didn't change anything in the repository?
>
>> Is this expected behaviour, i.e. is every repository expected to provide
>> dedicated package lists for every architecture, or is there a way to provide
>> an "all" architectures list only, without causing clients to throw warnings?
>
> Yes, i.e. no and yes there is. The previously mentioned wiki says this
> about the Architectures field: "The field identifies which architectures
> are supported by this repository." So your repository doesn't support
> this architecture and doesn't even ship the data the user has configured
> apt to get. Something is fishy, better warn about it.
>
> So, add riscv64 to Architectures in Release and be done, except that
> you should read the entire stanza as it will explain how a client will
> behave with that information. It also explains the specialness of 'all'.
> https://wiki.deb

Bug#1065211: ITP: ada-bar-codes -- Bar or QR code formatter for the Ada programming language

2024-03-01 Thread Nicolas Boulenguez
Package: wnpp
Severity: wishlist
Owner: Nicolas Boulenguez 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: ada-bar-codes
  Version : 002.20240219
  Upstream Contact: Gautier de Montmollin 
* URL : https://sourceforge.net/projects/ada-bar-codes/
* License : Expat-MIT
  Programming Lang: Ada
  Description : Bar or QR code formatter for the Ada programming language

This Ada library generates various bar or QR codes, in different
output formats such as PDF, SVG or bitmaps.

The package is ready, I intend to upload it to NEW after the gnat-13
transition.



Bug#1065218: ITP: assetfinder -- Find domains and subdomains related to a given domain

2024-03-01 Thread Josenilson Ferreira da Silva
Package: wnpp
Severity: wishlist
Owner: Josenilson Ferreira da Silva 
X-Debbugs-Cc: debian-devel@lists.debian.org, nilsonfsi...@hotmail.com

* Package name: assetfinder
  Version : 0.1.1
  Upstream Contact: Hudson 
* URL : https://github.com/tomnomnom/assetfinder
* License : MIT/X
  Programming Lang: Go
  Description : Find domains and subdomains related to a given domain

 assetfinder is a command-line tool designed to find domains and
 subdomains associated with a specific domain.
 .
 The main objective of the tool is to help security researchers and IT
 professionals discover and understand how the domains and sub-domains
 of a given organization are distributed, trying to find possible
 security flaws and vulnerabilities.
 .
 assetfinder uses multiple data sources to perform its research, including:
  - crt.sh
  - certspotter
  - hackertarget
  - threatcrowd
  - Wayback Machine
  - dns.bufferover.run
  - Facebook Graph API
  - Virustotal
  - findsubdomains
 This expands coverage and increases the accuracy of results.
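
 A quick usage sketch (the domain is a placeholder; going by upstream's
 README, the --subs-only flag restricts output to subdomains of the given
 domain):

  $ assetfinder example.com
  $ assetfinder --subs-only example.com > subdomains.txt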



hardinfo rebooted as hardinfo2 - community edition - RELEASE 2.0.12

2024-03-01 Thread hwspeedy

Hi Simon Quigley, Debian Maintainer, (CC: debian-devel list)

We are proud to suggest the new hardinfo2 release 2.0.12 for inclusion in 
the Debian repositories - tested on Debian 7 through sid (13).


https://github.com/hardinfo2/hardinfo2/releases/tag/release-2.0.12

PS: My first release, so please guide me, thanx.


Best Regards

hardinfo2

*Søren B. Kristensen*
hwspeedy
hardinfo2 Community Maintainer

GitHub - https://github.com/hardinfo2/hardinfo2
Project www - https://hardinfo2.org

On 19/02/2024 01:05, hwspeedy wrote:


Hi Simon Quigley, Debian Maintainer,

We are rebooting the very nice app called hardinfo as hardinfo2.
- There is no other app like this one, so it deserves a space.

GitHub - https://github.com/hardinfo2/hardinfo2
Project www - https://hardinfo2.org

*Reason for name change:*
- the domain is used for private stuff
- the hardinfo name is unavailable for community

The package is still called hardinfo to allow for upgrading - but 
everything else is changed to hardinfo2 for the program name, github 
name, homepage, etc.



*We would like a little help upgrading the app in Debian, if that 
is possible.*


Please check out this prerelease - if you could provide feedback on 
the package building system, thanx.


https://github.com/hardinfo2/hardinfo2/releases/tag/release-2.0.5pre

In the past every distro released this package with different compiler 
settings and dependencies and optional apps, resulting in poor user 
experience.
- So putting the packaging into the project should give a common 
ground for developers and distro maintainers.
- And due to no release for more than 10 years, a lot of users are 
also going to the source code in GitHub - a package is a better way of 
helping them.


*Changes:*
On the prerelease page there is also an overview of changes for hardinfo2 
versus hardinfo in Debian.



*Status:*
Contributions have almost vanished to zero due to there being no releases - 
but we hope to get the community active now with a new Debian and 
Red Hat release.
I am working with lpereira, the original app creator, who is busy, but 
has agreed to be my backup in case anything happens.

Please include her in all responses, thanx.

*_We still have some work before a release_*, but would like your 
feedback on the build/packaging system, etc.


In advance thanx,


--

Best Regards

hardinfo2

*Søren B. Kristensen*
hwspeedy
hardinfo2 Community Maintainer

GitHub - https://github.com/hardinfo2/hardinfo2
Project www - https://hardinfo2.org

Re: Any way to install packages+run autopkgtests on porterbox machines?

2024-03-01 Thread Johannes Schauer Marin Rodrigues
Hi,

Quoting David Bremner (2024-03-01 14:09:36)
> Nilesh Patra  writes:
> > When I want to fix autopkgtests for a package on a particular architecture,
> > I currently see no way to run autopkgtests before I dput since porter boxes
> > do not provide root access which autopkgtest needs.
> >
> > Currently I am manually hacking around the test scripts and running the 
> > autopkgtests but
> > this does not emulate the autopkgtest environment well enough. It also does 
> > not work
> > well for daemon-like packages for instance.
> Related, we wouldn't need to use the porterboxes if the situation for running
> autopkgtests locally was better.

I disagree. I think even with perfect autopkgtest support I'd want porter boxes
because:

 a) emulation is slow

 b) there are some bugs for which you want the real hardware to reproduce them

 c) some arches are a pain or impossible to emulate

> I have complained at length on IRC on the difficulties of running
> autopkgtests locally on non-amd64 architectures.

There surely are rough edges. I recently got [gic-version] fixed to run
autopkgtest with qemu on arm64. Are there more bugs on other non-amd64
architectures?

[gic-version] https://salsa.debian.org/ci-team/autopkgtest/-/merge_requests/239

> There is some tooling to build images for e.g. the qemu backend, but in my
> experience it does not work smoothly. I think the autopkgtest maintainers
> could use help with improving this tooling.

Agreed. The tooling around creating and running qemu images with the
autopkgtest qemu backend is not ideal. I'm personally interested in having this
work so I'd love to hear about the bugs you found related to making autopkgtest
support running inside qemu virtual machines. There currently exist several
attempts to improve the status quo. One issue with emulation support is making
the bootloader work. If the thing you want to test does not care about the
bootloader, then using Helmut's debvm might be what you want which is currently
being integrated into autopkgtest via [ssh-setup] by Jochen Sprickerhof. I was
not happy with the autopkgtest-build-qemu script which contains limitations
directly connected with its use of vmdb2 so I rolled my own and called it
mmdebstrap-autopkgtest-build-qemu which is available since mmdebstrap 1.4.0. It
should work for all architectures that have EFI support.  If you need to do
something on ppc64el I have another script which uses grub ieee1275 instead of
EFI booting. I have not managed to get mips to boot.

[ssh-setup] https://salsa.debian.org/ci-team/autopkgtest/-/merge_requests/237

> Personally I am reluctant to add non-amd64 autopkgtests to packages with the
> current state of tooling. I do not consider "upload and find out" an
> acceptable debugging strategy.

Personally, I found the bigger blocker for autopkgtest to be not the
architecture support but the use of docker or lxc. For my packages, I had to
repeatedly use the "upload and find out" strategy because most of my
autopkgtest problems are related to my code working on bare metal or inside
qemu and failing when run inside one of the container mechanisms used by
salsaci or debci.

Do you have some concrete issues in mind that prevent you from running
autopkgtest on architecture X? In case your code does not care about full
machine emulation or the kernel, or the bootloader, maybe all you want is a
foreign architecture chroot where your code can run with qemu user mode
emulation and then you skip all the quirks and bugs related to running full
qemu machines?
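
In case that sounds useful, a rough sketch (architecture, paths and package
name are examples; this relies on qemu-user-static binfmt handlers on the
host, and both steps need root):

  # create a foreign-architecture chroot as a plain directory
  $ sudo mmdebstrap --architectures=arm64 unstable /srv/chroot/unstable-arm64 http://deb.debian.org/debian

  # run the package's autopkgtests in it via qemu user mode emulation
  $ sudo autopkgtest mypkg_1.0-1.dsc -- chroot /srv/chroot/unstable-arm64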

Thanks!

cheers, josch



dpkg --verify not helpful?

2024-03-01 Thread Andreas Metzler
Hello,

iirc it was recently proposed to add a suggestion to run dpkg --verify
to the trixie upgrade notes to find missing files due to the usr-merge
transition. (Cannot find the reference right now).

However I just had file loss (due to libuuid changing its name to t64
and back again) and dpkg --verify (and --audit) is happy:
(sid)ametzler@argenau:/tmp$ ldd -r /usr/lib/x86_64-linux-gnu/libSM.so.6 | grep not
libuuid.so.1 => not found
(sid)ametzler@argenau:/tmp$ dpkg -s libuuid1
Package: libuuid1
Status: install ok installed
[...]
Version: 2.39.3-7
(sid)ametzler@argenau:/tmp$ dpkg --verify libuuid1 && echo success
success
(sid)ametzler@argenau:/tmp$ dpkg -L libuuid1 | grep /lib/
/usr/lib/x86_64-linux-gnu
(sid)ametzler@argenau:/tmp$ grep /lib/ /var/lib/dpkg/info/libuuid1\:amd64.list
/usr/lib/x86_64-linux-gnu
(sid)ametzler@argenau:/tmp$ dpkg --contents /var/cache/apt/archives/libuuid1_2.39.3-7_amd64.deb | grep '\.so'
-rw-r--r-- root/root 34872 2024-03-01 10:20 ./usr/lib/x86_64-linux-gnu/libuuid.so.1.3.0
lrwxrwxrwx root/root 0 2024-03-01 10:20 ./usr/lib/x86_64-linux-gnu/libuuid.so.1 -> libuuid.so.1.3.0

cu Andreas
-- 
`What a good friend you are to him, Dr. Maturin. His other friends are
so grateful to you.'
`I sew his ears on from time to time, sure'



Re: dpkg --verify not helpful?

2024-03-01 Thread Sven Joachim
On 2024-03-02 08:01 +0100, Andreas Metzler wrote:

> iirc it was recently proposed to add a suggestion to run dpkg --verify
> to the trixie upgrade notes to find missing files due to the usr-merge
> transition. (Cannot find the reference right now).
>
> However I just had file loss (due to libuuid changing its name to t64
> and back again) and dpkg --verify (and --audit) is happy:
> (sid)ametzler@argenau:/tmp$ ldd -r /usr/lib/x86_64-linux-gnu/libSM.so.6 | grep not
> libuuid.so.1 => not found
> (sid)ametzler@argenau:/tmp$ dpkg -s libuuid1
> Package: libuuid1
> Status: install ok installed
> [...]
> Version: 2.39.3-7
> (sid)ametzler@argenau:/tmp$ dpkg --verify libuuid1 && echo success
> success
> (sid)ametzler@argenau:/tmp$ dpkg -L libuuid1 | grep /lib/
> /usr/lib/x86_64-linux-gnu
> (sid)ametzler@argenau:/tmp$ grep /lib/ /var/lib/dpkg/info/libuuid1\:amd64.list
> /usr/lib/x86_64-linux-gnu
> (sid)ametzler@argenau:/tmp$ dpkg --contents /var/cache/apt/archives/libuuid1_2.39.3-7_amd64.deb | grep '\.so'
> -rw-r--r-- root/root 34872 2024-03-01 10:20 ./usr/lib/x86_64-linux-gnu/libuuid.so.1.3.0
> lrwxrwxrwx root/root 0 2024-03-01 10:20 ./usr/lib/x86_64-linux-gnu/libuuid.so.1 -> libuuid.so.1.3.0

The libuuid1t64 package has an unversioned Replaces: on libuuid1, so the
missing files belong to it.  If you remove libuuid1t64, the files are
gone and libuuid1 needs to be reinstalled.

The libuuid1 package should do something to prevent that file loss,
e.g. declaring its own Replaces+Conflicts on libuuid1t64.
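
A sketch of what that could look like in libuuid1's debian/control (the
version bound below is only illustrative):

  Package: libuuid1
  Architecture: any
  Conflicts: libuuid1t64 (<< 2.39.3-7)
  Replaces: libuuid1t64 (<< 2.39.3-7)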

Cheers,
   Sven