Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread James Addison
On Tue, 16 May 2023 at 04:22, Russ Allbery  wrote:
>
> > It did look like a veto to me. More importantly, isn't relying on
> > passersby to spot alleged harmful changes dangerous, especially for
> > undocumented, uncodified and untested use cases, like unspecified and
> > vague cross-compatibility requirements?
>
> I'm honestly boggled.  This is a thread on debian-devel, which is
> literally how we do architecture vetting in this project.
>
> I absolutely do not think that we can write down everything of importance
> in designing a distribution so that we don't have to have conversations
> with other people in the project who have deep expertise when considering
> a significant architectural change like changing the PT_INTERP path of
> core binaries.

We've almost certainly all encountered limitations in upstream
specifications and wondered when it's worth attempting a perceived
improvement despite potential friction.

If Debian did want/need to change the PT_INTERP path, is there a way
to achieve that in both a standards-compliant and also a
distro-compatible way?

And if there isn't: how would we resolve that?


This conversation is already lengthy, and recent experience confirms
that I'm the kind of person who doesn't always read all the details -
so I don't know whether it's a good idea for anyone to respond to my
message.  But I haven't seen anyone discussing or providing a safe
migration path.

Although tests may not exist publicly in this case, the idea of using
tests where possible to catch regressions seems relevant to me.  Tests
can help people to identify when a compatibility boundary has been
reached or overstepped, and, socially, they can provide a clear place
to gather discussion if & when they become outdated (particularly if
the tests themselves are provided as free and open source software).  Copying
binaries and running them seems like a form of testing, but perhaps we
could find better ways.



Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Simon McVittie
On Tue, 16 May 2023 at 02:50:48 +0100, Luca Boccassi wrote:
> This sounds like a very interesting use case, and the first real one
> mentioned, which is great to see - but I do not fully follow yet, from
> what you are saying it seems to me that what you need is for your
> binaries to use the usual pt_interp, that bit is clear. But why does
> it matter if /usr/bin/ls on the host uses a different one?

We don't need to run the ls from the host, but we do need to run
glibc-related executables like ldconfig and localedef from either the
host or the container runtime, whichever is newer. Because glibc is
a single source package, executables and libraries within the glibc
bubble sometimes make use of private symbols in libraries that are also
within the glibc bubble (and IMO they have a right to do so), even though
executables from outside glibc would be discouraged or disallowed from
doing so. This means that when we have chosen a particular version of
glibc (which, again, must be whichever one is newer), we try to use its
matching version for *everything* originating in the glibc source package.
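As an illustration only (the function names here are hypothetical, not the actual container-runtime code), the selection rule described above might be sketched as:

```python
# Hedged sketch of the rule Simon describes: compare the host's and the
# container runtime's glibc versions, and take *all* glibc-family binaries
# (ldconfig, localedef, the loader, libc itself) from whichever is newer.
def parse_glibc_version(s: str) -> tuple[int, ...]:
    """Turn a version string like '2.36' into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def pick_glibc_source(host_version: str, runtime_version: str) -> str:
    """Return which tree should provide every glibc-derived executable."""
    if parse_glibc_version(host_version) >= parse_glibc_version(runtime_version):
        return "host"
    return "runtime"

# Mixing versions is not an option: ldconfig from 2.37 may use private
# symbols that exist only in the 2.37 libc, so the choice has to apply to
# the whole glibc "bubble" at once.
print(pick_glibc_source("2.36", "2.37"))  # runtime
```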

In principle we could get exactly the same situation if we've imported a
library from the host system (as a dependency of the graphics stack) that
calls an executable as a subprocess and expects it to be >= the version
it was compiled for - hopefully not (/usr)/bin/ls, but potentially others.

The wider point than my specific use-case, though, is that when there's a
standard, you can't predict what other software authors have looked at the
statement "you can rely on this" and ... relied on it. See also Russ's
(as ever, excellent) mails to the same thread.

I appreciate that you are trying to explore the edges of the
problem/constraint space and say "what if we did this, could that work?",
and it's good that you are doing that; but part of that process is
working with the other people on this list when they say "no, we can't
do that because...", and respecting their input.

Thanks,
smcv



Bug#1036160: ITP: sudo-rs -- A safety-oriented and memory-safe implementation of sudo and su written in Rust

2023-05-16 Thread Sylvestre Ledru
Package: wnpp
Severity: wishlist
Owner: Sylvestre Ledru 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: sudo-rs
* URL : https://github.com/memorysafety/sudo-rs
* License : Apache-2.0, MIT
  Programming Lang: Rust
  Description : A safety-oriented and memory-safe implementation of sudo
and su written in Rust



Bug#1036165: ITP: atuin -- Rich shell history using a SQLite database with optional encrypted sync

2023-05-16 Thread Blair Noctis
Package: wnpp
Severity: wishlist
Owner: Blair Noctis 
X-Debbugs-Cc: debian-devel@lists.debian.org, n...@sail.ng

* Package name: atuin
  Version : 14.0.1
  Upstream Contact: Ellie Huxtable 
* URL : https://atuin.sh/
* License : MIT
  Programming Lang: Rust
  Description : Rich shell history replacement using SQLite with optional
encrypted sync server

Atuin replaces your existing shell history with a SQLite database, and records
additional context for your commands. Additionally, it provides optional and
fully encrypted synchronisation of your history between machines, via an Atuin
server (using the same binary).

This is a Rust package and fits in the Rust team process.

-- 
Sdrager,
Blair Noctis




Bug#1036169: ITP: jl -- Pretty Viewer for JSON logs

2023-05-16 Thread Andrej Shadura
Package: wnpp
Severity: wishlist
Owner: Andrej Shadura 

* Package name: jl
  Version : 0.1.0-1
  Upstream Author : Yunchi Luo
* URL : https://github.com/mightyguava/jl
* License : Expat
  Programming Lang: Go
  Description : Pretty Viewer for JSON logs

 jl (JL) is a parser and formatter for JSON logs, making machine-readable
 JSON logs human readable again.
 .
 jl currently supports 2 formatters, with plans to make the formatters
 customizable.
 .
 The default is -format compact, which extracts only important fields from
 the JSON log (like message, timestamp, and level), colorizes them, and
 presents them in an easy-to-skim way.  It drops unrecognized fields from
 the logs.
 .
 The other option is -format logfmt, which formats the JSON logs in a way
 that closely resembles logfmt
 (https://blog.codeship.com/logfmt-a-log-format-thats-easy-to-read-and-write/).
 This option will emit all fields from each log line.
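As a rough illustration of the logfmt idea (this is not jl's actual code; its quoting, ordering, and colorization rules are more involved):

```python
# Minimal sketch of what a logfmt-style formatter does: flatten one JSON
# log record into space-separated key=value pairs.
import json

def to_logfmt(line: str) -> str:
    record = json.loads(line)
    parts = []
    for key, value in record.items():
        value = str(value)
        if " " in value:              # quote values containing spaces
            value = '"%s"' % value
        parts.append(f"{key}={value}")
    return " ".join(parts)

print(to_logfmt('{"level": "info", "ts": 1684224000, "msg": "server started"}'))
# level=info ts=1684224000 msg="server started"
```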

Packager’s comment:

I’m not sure which binary name I should choose, since jl is a bit too short
and invites a future name collision. Please comment and let me know if you
have a better idea.

I think in any case it makes sense to name the source package
golang-github-mightyguava-jl.

-- 
Cheers,
  Andrej



Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Russ Allbery
James Addison  writes:

> We've almost certainly all encountered limitations in upstream
> specifications and wondered when it's worth attempting a perceived
> improvement despite potential friction.

> If Debian did want/need to change the PT_INTERP path, is there a way to
> achieve that in both a standards-compliant and also a distro-compatible
> way?

My recollection is that we considered this when starting with multiarch
but none of the other distributions outside of Ubuntu were very enthused,
so we dropped it.  I saw those discussions on the glibc development lists,
which is not really the place for detailed x86_64 ABI discussion but which
would certainly be an interested party and has roughly the right people
present.  If I saw a problem that would need to be addressed with an ABI
change and didn't have anyone else to ask, that's where I personally would
start, but the Debian gcc and glibc maintainers would probably know more.

The bar for justifying a change will be fairly high, based on past
discussions.  I would expect it to require some sort of significant bug
that can't be worked around in another way.

> Although tests may not exist publicly in this case, the idea of using
> tests where possible to catch regressions seems relevant to me.  Tests
> can help people to identify when a compatibility boundary has been
> reached or overstepped, and, socially, they can provide a clear place to
> gather discussion if & when they become outdated (particularly if the
> tests themselves are provided as free and open source software).  Copying
> binaries and running them seems like a form of testing, but perhaps we
> could find better ways.

I don't know if anyone has written an ABI compliance test for binaries.
That sounds like something that would be in scope for the Linux Test
Project, though, and it's possible their existing tests do some of this.

-- 
Russ Allbery (r...@debian.org)  



Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Didier 'OdyX' Raboud
Le mardi, 16 mai 2023, 17.06:38 h CEST Russ Allbery a écrit :
> I don't know if anyone has written an ABI compliance test for binaries.
> That sounds like something that would be in scope for the Linux Test
> Project, though, and it's possible their existing tests do some of this.

This existed in the (now distant) past as the "Linux Distribution Checker", 
in the context of the Linux Standard Base, which Debian and Ubuntu stopped 
caring about in late 2015.

I'm not aware of more recent efforts in that direction; but it's an 
understatement to say the landscape has changed quite a bit since: containers, 
sandbox environments, and others have forever changed the way we think about 
distributing binary executables. LSB had that ambition, and failed.

-- 
OdyX



Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Russ Allbery
Didier 'OdyX' Raboud  writes:

> This existed in the (now distant) past as the "Linux Distribution
> Checker", in the context of the Linux Standard Base, which Debian and
> Ubuntu stopped caring about in late 2015.

Ah, yes, thank you, that makes sense.

> I'm not aware of more recent efforts in that direction; but it's an
> understatement to say the landscape has changed quite a bit since:
> containers, sandbox environments, and others have forever changed the
> way we think about distributing binary executables. LSB had that
> ambition, and failed.

While that is certainly true, I feel like the pendulum may be swinging
back in a slightly different way with Go and Rust popularizing the idea
that you should be able to copy around a binary and run it on any Linux
system with a compatible architecture.  This is a much smaller problem
than LSB was trying to solve since LSB was trying to standardize things
like the shared library ABI and SONAMEs, which Go and Rust intentionally
avoid with static linking.  But they do rely very deeply on every system
being able to execute binaries built to the Linux ABI and glibc.  (I
realize that's a different question than the one discussed in this
thread.)

-- 
Russ Allbery (r...@debian.org)  



Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Luca Boccassi
On Tue, 16 May 2023 at 04:22, Russ Allbery  wrote:
> Luca Boccassi  writes:
> > On Mon, 15 May 2023 at 16:18, Russ Allbery  wrote:
>
> >> Note that we're not talking about complicated packages with lots of
> >> runtime like, say, Emacs.  As I understand it your proposal wouldn't
> >> change PT_INTERP for that binary anyway.  We're presumably talking
> >> about the kind of binaries that you need to bootstrap a minimal system,
> >> so packages like coreutils or bash.  And I would indeed expect those
> >> binaries to be generally portable, as long as the same critical shared
> >> libraries are available on other systems (in this case, PCRE2 and
> >> ncurses).
>
> > Is that really the case? Let's test that hypothesis:
>
> I think you may have not read my whole message before replying, and also
> please assume that I know really basic facts about glibc compatibility and
> am not referring to that.
>
> I said "of a similar vintage" (farther down in my message) because of
> course we all know that binaries built against newer versions of glibc
> don't run on systems with older versions of glibc (and likewise for shared
> libraries in general and thus libselinux), and you tested a Debian
> unstable package on an Ubuntu system from 2020.  This says nothing
> interesting and has nothing to do with my point.

It does say something interesting. When we started, the assertion was
that packages not relying on the symlink being present was fundamental
for portability and cross-compatibility. Then, it shrank to
portability and cross-compatibility of a subset of packages. Now it
has shrunk further, to portability and cross-compatibility of a subset
of packages of a subset of vintages.

Why is the requirement that libselinux and glibc are not too recent
fine to have on coreutils, but the requirement that there's a symlink
is out of the question? If anything, the latter is trivial to add, so
the former should feel more "disruptive" and game-breaking, no?

> > Whoops. Does this make coreutils rc-buggy now? ;-)
>
> You are the only person who is talking about RC bugs.  The bar here is not
> "prove to me that this is RC buggy," the bar is "you have to prove to a
> bunch of Debian core maintainers that they should break the ABI in their
> packages" (not to mention the additional small but permanent build
> complexity).  Demanding they prove to you that it's a bad idea is not how
> this works.

It's a tongue-in-cheek comment; I had hoped the emoji would give that away.

> The point of standards like an ABI is that a bunch of entirely unrelated
> people who never talk to each other and never look at each other's
> software are allowed to rely on them and assume no one else will break
> them.  This is how free software scales; without invariants that everyone
> can rely on without having to explain how they're relying on them, it is
> much more difficult to get an ecosystem to work together.  We don't just
> break things because they don't seem important; the space of people who
> may be relying on this standard is unknowable, which is the entire point.
> Opening those boxes is really expensive (in time, planning, communication,
> debugging, and yes, compatibility) and we should only do it when it
> really, really matters.

But does it really work, or do we just hope it does? I personally see
a great deal of signals that say it doesn't. For one, in 2023, "build
one binary and run it everywhere" doesn't really seem to me to be the
principal choice in terms of portability. It might have been once, and
there might be a number of relevant use cases still, but it seems to
me the industry has largely gone in a very different direction
nowadays.

> > It did look like a veto to me. More importantly, isn't relying on
> > passersby to spot alleged harmful changes dangerous, especially for
> > undocumented, uncodified and untested use cases, like unspecified and
> > vague cross-compatibility requirements?
>
> I'm honestly boggled.  This is a thread on debian-devel, which is
> literally how we do architecture vetting in this project.
>
> I absolutely do not think that we can write down everything of importance
> in designing a distribution so that we don't have to have conversations
> with other people in the project who have deep expertise when considering
> a significant architectural change like changing the PT_INTERP path of
> core binaries.

Not everything of course, but wouldn't it be worth it to specify core
items such as cross-compatibility requirements? Not in terms of
implementation details, but in terms of what the goals and the minimum
expectations are. There are obviously some, and they are obviously of
great importance to many, and yet to me it feels like they shift every
time I ask a different question.

> >> I mostly jumped in because it felt like you and Steve were just yelling
> >> at each other and I thought I might be able to explain some of where he
> >> was coming from in a way that may make more sense.
>
> > I don't believ

Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Luca Boccassi
On Tue, 16 May 2023 at 09:27, Simon McVittie  wrote:
>
> On Tue, 16 May 2023 at 02:50:48 +0100, Luca Boccassi wrote:
> > This sounds like a very interesting use case, and the first real one
> > mentioned, which is great to see - but I do not fully follow yet, from
> > what you are saying it seems to me that what you need is for your
> > binaries to use the usual pt_interp, that bit is clear. But why does
> > it matter if /usr/bin/ls on the host uses a different one?
>
> We don't need to run the ls from the host, but we do need to run
> glibc-related executables like ldconfig and localedef from either the
> host or the container runtime, whichever is newer. Because glibc is
> a single source package, executables and libraries within the glibc
> bubble sometimes make use of private symbols in libraries that are also
> within the glibc bubble (and IMO they have a right to do so), even though
> executables from outside glibc would be discouraged or disallowed from
> doing so. This means that when we have chosen a particular version of
> glibc (which, again, must be whichever one is newer), we try to use its
> matching version for *everything* originating in the glibc source package.
>
> In principle we could get exactly the same situation if we've imported a
> library from the host system (as a dependency of the graphics stack) that
> calls an executable as a subprocess and expects it to be >= the version
> it was compiled for - hopefully not (/usr)/bin/ls, but potentially others.

Thanks for the clarification. If I understood correctly, your use case
is that sometimes (e.g. when they are newer) you pull binaries
(e.g. ldconfig) from the host and run them in the container? So if,
say, ldconfig on the host points to /usr/lib/ld, but your container is
not usr-merged, it wouldn't find the interpreter and would fail?

> The wider point than my specific use-case, though, is that when there's a
> standard, you can't predict what other software authors have looked at the
> statement "you can rely on this" and ... relied on it. See also Russ's
> (as ever, excellent) mails to the same thread.
>
> I appreciate that you are trying to explore the edges of the
> problem/constraint space and say "what if we did this, could that work?",
> and it's good that you are doing that; but part of that process is
> working with the other people on this list when they say "no, we can't
> do that because...", and respecting their input.

I respect and appreciate the input, but I want to understand it too,
hence the "because..." part is what I was looking for - so thanks for
providing it, it is really useful.

Kind regards,
Luca Boccassi



Re: Bug#1035904: dpkg currently warning about merged-usr systems (revisited)

2023-05-16 Thread Russ Allbery
Luca Boccassi  writes:

> It does say something interesting. When we started, the assertion was
> that packages not relying on the symlink being present was fundamental
> for portability and cross-compatibility. Then, it shrank to
> portability and cross-compatibility of a subset of packages. Now it
> has shrunk further, to portability and cross-compatibility of a subset
> of packages of a subset of vintages.

I think it's pretty clear that I'm totally failing to communicate what I
was getting at with this example and the process is becoming incredibly
frustrating, so I'm going to stop trying here.

> But does it really work, or do we just hope it does? I personally see a
> great deal of signals that say it doesn't. For one, in 2023, "build one
> binary and run it everywhere" doesn't really seem to me to be the
> principal choice in terms of portability.

Well, believe what you believe, but I literally do that daily, as does
anyone else who regularly uses software from a Rust or Go ecosystem.  Not
a single work day goes by without me running, on some random Ubuntu or Red
Hat or Debian system, binaries that were compiled on some random other
Linux distribution (often I have no idea which one).

> It might have been once, and there might be a number of relevant use
> cases still, but it seems to me the industry has largely gone in a very
> different direction nowadays.

I do think the industry is moving away (well, has already moved away) from
Linux Standards Base pre-compiled C binaries without wrappers like snap or
flatpak, although there are some very notable exceptions, such as Steam's
special use cases explained elsewhere on the thread.  I don't believe the
industry is moving away from Go and Rust binaries, so this is, I think, a
more complicated picture than your description indicates.

> Not everything of course, but wouldn't it be worth it to specify core
> items such as cross-compatibility requirements? Not in terms of
> implementation details, but in terms of what the goals and the minimum
> expectations are. There are obviously some, and they are obviously of
> great importance to many, and yet to me it feels like they shift every
> time I ask a different question.

I am certainly never going to say no to more maintained documentation.
That would be great.  If you're feeling motivated and you're willing to
really listen to what people are saying and accept and understand a lot of
ambiguity and a lot of assumptions that don't match your assumptions, go
for it!

I personally am not even doing justice to Policy in its existing scope, so
this isn't something I will personally be tackling.  Honestly, I should
have spent all of the time I spent on this thread working on Policy
instead.  :)

> But if you prefer to focus on first principles, that's great! I've been
> asking this a lot: is cross-distribution harmonization a core
> requirement? Is it important enough to trump other goals and
> distro-local benefits? If so, isn't it worth discussing it and then
> penciling it down?

I think some parts of it are a core requirement.  It would be very
surprising, and very bad, if we couldn't run Go and Rust binaries built on
another distribution, for example, or if Go or Rust binaries built on
Debian couldn't be used on other distributions, both subject to the normal
constraints of glibc cross-compatibility that everyone building binaries
already knows about.  I think other parts of it are not a core
requirement, but still something that is nice to have and that we
shouldn't break without a really good reason.

I think the specific details of the Linux Standards Base process mostly
didn't turn into something the Linux world wanted to support going
forward, and thus LSB harmonization, while an interesting idea, is no
longer a requirement in general.  However, we still follow some pieces of
it that were properly implemented (like the FHS), and while we shouldn't
do that blindly forever (if for no other reason than the FHS is no longer
maintained), it's also valuable to not change that too fast and to only
break compatibility with now-widely-expected file system layout properties
when we have a really good reason.  Ideally, we would pick some smaller
subset of the LSB that still matters and agree with other major
distributions on some points of compatibility to, at the very least, help
ease the common problem of needing to administer multiple systems running
different Linux distributions.

There is no one answer to whether this trumps other goals and distro-local
benefits.  It depends on what those benefits are and what those goals are
and how important they are.  For Guix, obviously their immutable tree and
hard dependency pinning are more important to them than cross-distro
compatibility, and given their goals, that seems like an entirely
reasonable decision.  I would disagree vehemently with that decision for
Debian because Debian is not Guix.

In other words, it depends.

-- 
Russ Allbery (r...@debian.org)  

64-bit time_t transition for 32-bit archs: a proposal

2023-05-16 Thread Steve Langasek
Hi folks,

Over on debian-arm@lists, there has been discussion on and off for several
months now about the impending 32-bit timepocalypse.  As many of you are
aware, 32-bit time_t runs out of space in 2038; the exact date is now less
than 15 years away.  It is not too early to start addressing the question of
32-bit architecture compatibility, as there are already reports in the wild
of calendar failures for future events on 32-bit archs.
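For concreteness, the hard limit is easy to compute: a signed 32-bit time_t overflows 2**31 - 1 seconds after the Unix epoch.

```python
# The rollover in question: the last second representable in a signed
# 32-bit time_t, converted to a calendar date.
from datetime import datetime, timezone

last_second = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(last_second.isoformat())  # 2038-01-19T03:14:07+00:00
```

One tick later, an unpatched 32-bit system wraps around to December 1901.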

While it’s debatable whether most of the 32-bit archs in Debian (and as
unofficial ports) will be in use long enough to worry about 2038, there’s at
least substantial reason to believe that 32-bit ARM (armhf and possibly
armel) will still be in use 15 years from now.  Solving this problem has
already been proposed as a release goal for trixie:

https://wiki.debian.org/ReleaseGoals/64bit-time

For those who prefer a primer in video form, Wookey gave a talk at FOSDEM
about this: 

https://fosdem.org/2023/schedule/event/fixing_2038/ 


There are two basic ways to solve this.  Either we can rebootstrap the ports
that we want to keep; this makes upgrades unreliable, and doesn’t help any
ports for which we don’t do this work.  Or we can do library transitions for
the set of all libraries that reference time_t in their ABI; this is a bit
more work for everyone in the short term because it will impact testing
migration across all archs rather than being isolated to a single port, but
it’s on the order of 500 library packages which is comparable to other ABI
transitions we’ve done in the past (e.g.  ldbl
https://lists.debian.org/debian-devel/2007/05/msg01173.html).

The difficulty is, unlike the ldbl or c2 transitions of the past, time_t ABI
compatibility can’t be worked out by static analysis of the exposed symbols,
only by traversing the headers and mapping them to the library ABI.  So when
I say “on the order of 500 packages”, this is because at the moment we have
about 1900 -dev packages that have failed to be analyzed because their
headers don’t compile out of the box.  I am currently working through
getting these all analyzed, prioritized by number of reverse-dependencies,
but this process will take at least a couple of months before we have a
complete list of libraries to be transitioned.  Help improving the scripts
at https://salsa.debian.org/vorlon/armhf-time_t/ to complete this analysis
is welcome.

Based on the analysis to date, we can say there is a lower bound of ~4900
source packages which will need to be rebuilt for the transition, and an
upper bound of ~6200.  I believe this is a manageable transition, and
propose that we proceed with it at the start of the trixie release cycle.

=== Technical details ===

The proposed implementation of this transition is as follows:

* Update dpkg-buildflags to emit -D_FILE_OFFSET_BITS=64 and -D_TIME_BITS=64
  by default on 32-bit archs.  (Note that this enables LFS support, because
  glibc’s 64-bit time_t implementation is only available with LFS also
  turned on, to avoid a combinatorial explosion of entry points.)

* … but NOT on i386.  Because i386 as an architecture is primarily of
  interest for running legacy binaries which cannot be rebuilt against a new
  ABI, changing the ABI on i386 would be counterproductive, as mentioned in
  https://wiki.debian.org/ReleaseGoals/64bit-time.

* For a small number of packages (~80) whose ABI is not sensitive to time_t
  but IS sensitive to LFS, along with their reverse-dependencies, filter out
  the above buildflags with DEB_BUILD_MAINT_OPTIONS=future=-lfs[1]. 
  Maintainers may choose to introduce an ABI transition for LFS, but as this
  is not required for time_t, we should not force it as part of *this*
  transition.  If there is a package that depends on both a time_t-sensitive
  library and an LFS-sensitive but time_t-insensitive library, however, then
  the LFS library will need to transition.  

* Largely via NMU, add a “t64” suffix to the name of runtime library
  packages whose ABI changes on rebuild with the above flags.  If an
  affected library already has a different suffix (c102, c2, ldbl, g…), drop
  it at this time.

* In order to not unnecessarily break compatibility with third-party (or
  obsolete) packages on architectures where the ABI is not actually
  changing, on 64-bit archs + i386, emit a Provides/Replaces/Breaks of the
  old library package name.  A sample implementation of the necessary
  packaging changes is at
  https://salsa.debian.org/vorlon/armhf-time_t/-/blob/main/time-t-me.

* Once the renamed library packages have been built on all archs and
  accepted through binary NEW, issue binNMUs of the reverse-dependencies
  across *all* architectures, to ensure that users get upgraded to the
  current runtime library package and aren’t left with stale packages under
  the old name on upgrade.

* In the future when the upstream SONAME changes, the t64 suffix should be
  dropped.

Your thoughts?

Thanks,
-- 
Steve Langasek   Give me a lev

Re: 64-bit time_t transition for 32-bit archs: a proposal

2023-05-16 Thread Russ Allbery
Steve Langasek  writes:

> * Largely via NMU, add a “t64” suffix to the name of runtime library
>   packages whose ABI changes on rebuild with the above flags.  If an
>   affected library already has a different suffix (c102, c2, ldbl, g…), drop
>   it at this time.

This is possibly me being too fiddly (and also I'm not 100% sure that I'll
get to this in time), but ideally I'd like to do an upstream SONAME
transition for one of my shared libraries (and probably will go ahead and
change it for i386 as well, since I'm dubious of the need to run old
binaries with new libraries in this specific case).

What's the best way for me to do that such that I won't interfere with the
more automated general transition?  Will you somehow automatically detect
packages that have already been transitioned?  Or should I wait until the
package has been automatically transitioned and *then* do an upstream
transition?

> Your thoughts?

The one additional wrinkle is that there are packages that, due to
historical error or unfortunate design choices, have on-disk data files
that also encode the width of time_t.  (I know of inn2, which is partly my
fault, but presumably there are more.)  Rebuilding that package with the
64-bit time_t flags would essentially introduce an RC bug (data loss)
because it will treat its existing data files as corrupt.  Do you have any
idea how to deal with this case?

(The LFS transition was kind of a mess and essentially required users to
do manual data migration.  This time around, maybe we'll manage to write a
conversion program in time.)
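To illustrate the failure mode with a hypothetical record layout (not inn2's actual format): any package that wrote raw C structs containing a time_t to disk sees its record size change with the type's width, so pre-transition files stop parsing.

```python
# Hypothetical on-disk record: a time_t timestamp plus a 32-bit flags field,
# in little-endian standard-size packing (no padding).
import struct

OLD_RECORD = "<iI"   # 32-bit time_t + flags, as written before the transition
NEW_RECORD = "<qI"   # 64-bit time_t + flags, after -D_TIME_BITS=64

print(struct.calcsize(OLD_RECORD))  # 8
print(struct.calcsize(NEW_RECORD))  # 12
# A reader built against the new layout would misinterpret every old record,
# which is why such packages need an explicit data-conversion step rather
# than a plain rebuild.
```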

-- 
Russ Allbery (r...@debian.org)  



Re: 64-bit time_t transition for 32-bit archs: a proposal

2023-05-16 Thread YunQiang Su
For mipsel, we have one more thing to do:
- NaN2008 vs. legacy NaN
So I'd prefer to rebootstrap (only for mipsel).
And in fact we did it: https://repo.oss.cipunited.com/debian/

Russ Allbery  于2023年5月17日周三 12:31写道:
>
> Steve Langasek  writes:
>
> > * Largely via NMU, add a “t64” suffix to the name of runtime library
> >   packages whose ABI changes on rebuild with the above flags.  If an
> >   affected library already has a different suffix (c102, c2, ldbl, g…), drop
> >   it at this time.
>
> This is possibly me being too fiddly (and also I'm not 100% sure that I'll
> get to this in time), but ideally I'd like to do an upstream SONAME
> transition for one of my shared libraries (and probably will go ahead and
> change it for i386 as well, since I'm dubious of the need to run old
> binaries with new libraries in this specific case).
>
> What's the best way for me to do that such that I won't interfere with the
> more automated general transition?  Will you somehow automatically detect
> packages that have already been transitioned?  Or should I wait until the
> package has been automatically transitioned and *then* do an upstream
> transition?
>
> > Your thoughts?
>
> The one additional wrinkle is that there are packages that, due to
> historical error or unfortunate design choices, have on-disk data files
> that also encode the width of time_t.  (I know of inn2, which is partly my
> fault, but presumably there are more.)  Rebuilding that package with the
> 64-bit time_t flags would essentially introduce an RC bug (data loss)
> because it will treat its existing data files as corrupt.  Do you have any
> idea how to deal with this case?
>
> (The LFS transition was kind of a mess and essentially required users to
> do manual data migration.  This time around, maybe we'll manage to write a
> conversion program in time.)
>

Since there may be some unknown problems, we cannot tell our users that
they can upgrade smoothly, whether we rebootstrap or rebuild some packages.

So I guess rebootstrapping may be the better choice, at least so that
users can understand what we did.

> --
> Russ Allbery (r...@debian.org)  
>


-- 
YunQiang Su